23:10:59 Started by timer
23:10:59 Running as SYSTEM
23:10:59 [EnvInject] - Loading node environment variables.
23:10:59 Building remotely on prd-ubuntu1804-docker-8c-8g-9933 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:10:59 [ssh-agent] Looking for ssh-agent implementation...
23:10:59 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:10:59 $ ssh-agent
23:10:59 SSH_AUTH_SOCK=/tmp/ssh-nkGmbJUOXrj8/agent.2077
23:10:59 SSH_AGENT_PID=2079
23:10:59 [ssh-agent] Started.
23:10:59 Running ssh-add (command line suppressed)
23:10:59 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_5468798684531083163.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_5468798684531083163.key)
23:10:59 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:10:59 The recommended git tool is: NONE
23:11:01 using credential onap-jenkins-ssh
23:11:01 Wiping out workspace first.
23:11:01 Cloning the remote Git repository
23:11:01 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:11:01 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:11:01 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:11:01 > git --version # timeout=10
23:11:01 > git --version # 'git version 2.17.1'
23:11:01 using GIT_SSH to set credentials Gerrit user
23:11:01 Verifying host key using manually-configured host key entries
23:11:01 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:11:01 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:11:01 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:11:02 Avoid second fetch
23:11:02 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:11:02 Checking out Revision 5582cd406c8414919c4d5d7f5b116f4f1e5a971d (refs/remotes/origin/master)
23:11:02 > git config core.sparsecheckout # timeout=10
23:11:02 > git checkout -f 5582cd406c8414919c4d5d7f5b116f4f1e5a971d # timeout=30
23:11:02 Commit message: "Merge "Add ACM regression test suite""
23:11:02 > git rev-list --no-walk 5582cd406c8414919c4d5d7f5b116f4f1e5a971d # timeout=10
23:11:02 provisioning config files...
23:11:02 copy managed file [npmrc] to file:/home/jenkins/.npmrc
23:11:02 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
23:11:02 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins990288347304115897.sh
23:11:02 ---> python-tools-install.sh
23:11:02 Setup pyenv:
23:11:02 * system (set by /opt/pyenv/version)
23:11:02 * 3.8.13 (set by /opt/pyenv/version)
23:11:02 * 3.9.13 (set by /opt/pyenv/version)
23:11:02 * 3.10.6 (set by /opt/pyenv/version)
23:11:06 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-ohSB
23:11:06 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
23:11:10 lf-activate-venv(): INFO: Installing: lftools
23:11:43 lf-activate-venv(): INFO: Adding /tmp/venv-ohSB/bin to PATH
23:11:43 Generating Requirements File
23:12:12 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
23:12:12 lftools 0.37.9 requires openstacksdk>=2.1.0, but you have openstacksdk 0.62.0 which is incompatible.
23:12:12 Python 3.10.6
23:12:12 pip 24.0 from /tmp/venv-ohSB/lib/python3.10/site-packages/pip (python 3.10)
23:12:13 appdirs==1.4.4
23:12:13 argcomplete==3.2.2
23:12:13 aspy.yaml==1.3.0
23:12:13 attrs==23.2.0
23:12:13 autopage==0.5.2
23:12:13 beautifulsoup4==4.12.3
23:12:13 boto3==1.34.53
23:12:13 botocore==1.34.53
23:12:13 bs4==0.0.2
23:12:13 cachetools==5.3.3
23:12:13 certifi==2024.2.2
23:12:13 cffi==1.16.0
23:12:13 cfgv==3.4.0
23:12:13 chardet==5.2.0
23:12:13 charset-normalizer==3.3.2
23:12:13 click==8.1.7
23:12:13 cliff==4.6.0
23:12:13 cmd2==2.4.3
23:12:13 cryptography==3.3.2
23:12:13 debtcollector==3.0.0
23:12:13 decorator==5.1.1
23:12:13 defusedxml==0.7.1
23:12:13 Deprecated==1.2.14
23:12:13 distlib==0.3.8
23:12:13 dnspython==2.6.1
23:12:13 docker==4.2.2
23:12:13 dogpile.cache==1.3.2
23:12:13 email_validator==2.1.1
23:12:13 filelock==3.13.1
23:12:13 future==1.0.0
23:12:13 gitdb==4.0.11
23:12:13 GitPython==3.1.42
23:12:13 google-auth==2.28.1
23:12:13 httplib2==0.22.0
23:12:13 identify==2.5.35
23:12:13 idna==3.6
23:12:13 importlib-resources==1.5.0
23:12:13 iso8601==2.1.0
23:12:13 Jinja2==3.1.3
23:12:13 jmespath==1.0.1
23:12:13 jsonpatch==1.33
23:12:13 jsonpointer==2.4
23:12:13 jsonschema==4.21.1
23:12:13 jsonschema-specifications==2023.12.1
23:12:13 keystoneauth1==5.6.0
23:12:13 kubernetes==29.0.0
23:12:13 lftools==0.37.9
23:12:13 lxml==5.1.0
23:12:13 MarkupSafe==2.1.5
23:12:13 msgpack==1.0.7
23:12:13 multi_key_dict==2.0.3
23:12:13 munch==4.0.0
23:12:13 netaddr==1.2.1
23:12:13 netifaces==0.11.0
23:12:13 niet==1.4.2
23:12:13 nodeenv==1.8.0
23:12:13 oauth2client==4.1.3
23:12:13 oauthlib==3.2.2
23:12:13 openstacksdk==0.62.0
23:12:13 os-client-config==2.1.0
23:12:13 os-service-types==1.7.0
23:12:13 osc-lib==3.0.1
23:12:13 oslo.config==9.4.0
23:12:13 oslo.context==5.5.0
23:12:13 oslo.i18n==6.3.0
23:12:13 oslo.log==5.5.0
23:12:13 oslo.serialization==5.4.0
23:12:13 oslo.utils==7.1.0
23:12:13 packaging==23.2
23:12:13 pbr==6.0.0
23:12:13 platformdirs==4.2.0
23:12:13 prettytable==3.10.0
23:12:13 pyasn1==0.5.1
23:12:13 pyasn1-modules==0.3.0
23:12:13 pycparser==2.21
23:12:13 pygerrit2==2.0.15
23:12:13 PyGithub==2.2.0
23:12:13 pyinotify==0.9.6
23:12:13 PyJWT==2.8.0
23:12:13 PyNaCl==1.5.0
23:12:13 pyparsing==2.4.7
23:12:13 pyperclip==1.8.2
23:12:13 pyrsistent==0.20.0
23:12:13 python-cinderclient==9.4.0
23:12:13 python-dateutil==2.8.2
23:12:13 python-heatclient==3.4.0
23:12:13 python-jenkins==1.8.2
23:12:13 python-keystoneclient==5.3.0
23:12:13 python-magnumclient==4.3.0
23:12:13 python-novaclient==18.4.0
23:12:13 python-openstackclient==6.0.1
23:12:13 python-swiftclient==4.5.0
23:12:13 PyYAML==6.0.1
23:12:13 referencing==0.33.0
23:12:13 requests==2.31.0
23:12:13 requests-oauthlib==1.3.1
23:12:13 requestsexceptions==1.4.0
23:12:13 rfc3986==2.0.0
23:12:13 rpds-py==0.18.0
23:12:13 rsa==4.9
23:12:13 ruamel.yaml==0.18.6
23:12:13 ruamel.yaml.clib==0.2.8
23:12:13 s3transfer==0.10.0
23:12:13 simplejson==3.19.2
23:12:13 six==1.16.0
23:12:13 smmap==5.0.1
23:12:13 soupsieve==2.5
23:12:13 stevedore==5.2.0
23:12:13 tabulate==0.9.0
23:12:13 toml==0.10.2
23:12:13 tomlkit==0.12.4
23:12:13 tqdm==4.66.2
23:12:13 typing_extensions==4.10.0
23:12:13 tzdata==2024.1
23:12:13 urllib3==1.26.18
23:12:13 virtualenv==20.25.1
23:12:13 wcwidth==0.2.13
23:12:13 websocket-client==1.7.0
23:12:13 wrapt==1.16.0
23:12:13 xdg==6.0.0
23:12:13 xmltodict==0.13.0
23:12:13 yq==3.2.3
23:12:13 [EnvInject] - Injecting environment variables from a build step.
23:12:13 [EnvInject] - Injecting as environment variables the properties content
23:12:13 SET_JDK_VERSION=openjdk17
23:12:13 GIT_URL="git://cloud.onap.org/mirror"
23:12:13
23:12:13 [EnvInject] - Variables injected successfully.
23:12:13 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins6917225224165365458.sh
23:12:13 ---> update-java-alternatives.sh
23:12:13 ---> Updating Java version
23:12:13 ---> Ubuntu/Debian system detected
23:12:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
23:12:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
23:12:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
23:12:14 openjdk version "17.0.4" 2022-07-19
23:12:14 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
23:12:14 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
23:12:14 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
23:12:14 [EnvInject] - Injecting environment variables from a build step.
23:12:14 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
23:12:14 [EnvInject] - Variables injected successfully.
23:12:14 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins6422848241316460569.sh
23:12:14 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
23:12:14 + set +u
23:12:14 + save_set
23:12:14 + RUN_CSIT_SAVE_SET=ehxB
23:12:14 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
23:12:14 + '[' 1 -eq 0 ']'
23:12:14 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:14 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:14 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:14 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:14 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:14 + export ROBOT_VARIABLES=
23:12:14 + ROBOT_VARIABLES=
23:12:14 + export PROJECT=pap
23:12:14 + PROJECT=pap
23:12:14 + cd /w/workspace/policy-pap-master-project-csit-pap
23:12:14 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:14 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:14 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:14 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
23:12:14 + relax_set
23:12:14 + set +e
23:12:14 + set +o pipefail
23:12:14 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:14 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:14 +++ mktemp -d
23:12:14 ++ ROBOT_VENV=/tmp/tmp.W4wZtNCRb3
23:12:14 ++ echo ROBOT_VENV=/tmp/tmp.W4wZtNCRb3
23:12:14 +++ python3 --version
23:12:14 ++ echo 'Python version is: Python 3.6.9'
23:12:14 Python version is: Python 3.6.9
23:12:14 ++ python3 -m venv --clear /tmp/tmp.W4wZtNCRb3
23:12:15 ++ source /tmp/tmp.W4wZtNCRb3/bin/activate
23:12:15 +++ deactivate nondestructive
23:12:15 +++ '[' -n '' ']'
23:12:15 +++ '[' -n '' ']'
23:12:15 +++ '[' -n /bin/bash -o -n '' ']'
23:12:15 +++ hash -r
23:12:15 +++ '[' -n '' ']'
23:12:15 +++ unset VIRTUAL_ENV
23:12:15 +++ '[' '!' nondestructive = nondestructive ']'
23:12:15 +++ VIRTUAL_ENV=/tmp/tmp.W4wZtNCRb3
23:12:15 +++ export VIRTUAL_ENV
23:12:15 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:15 +++ PATH=/tmp/tmp.W4wZtNCRb3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:15 +++ export PATH
23:12:15 +++ '[' -n '' ']'
23:12:15 +++ '[' -z '' ']'
23:12:15 +++ _OLD_VIRTUAL_PS1=
23:12:15 +++ '[' 'x(tmp.W4wZtNCRb3) ' '!=' x ']'
23:12:15 +++ PS1='(tmp.W4wZtNCRb3) '
23:12:15 +++ export PS1
23:12:15 +++ '[' -n /bin/bash -o -n '' ']'
23:12:15 +++ hash -r
23:12:15 ++ set -exu
23:12:15 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
23:12:18 ++ echo 'Installing Python Requirements'
23:12:18 Installing Python Requirements
23:12:18 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
23:12:36 ++ python3 -m pip -qq freeze
23:12:37 bcrypt==4.0.1
23:12:37 beautifulsoup4==4.12.3
23:12:37 bitarray==2.9.2
23:12:37 certifi==2024.2.2
23:12:37 cffi==1.15.1
23:12:37 charset-normalizer==2.0.12
23:12:37 cryptography==40.0.2
23:12:37 decorator==5.1.1
23:12:37 elasticsearch==7.17.9
23:12:37 elasticsearch-dsl==7.4.1
23:12:37 enum34==1.1.10
23:12:37 idna==3.6
23:12:37 importlib-resources==5.4.0
23:12:37 ipaddr==2.2.0
23:12:37 isodate==0.6.1
23:12:37 jmespath==0.10.0
23:12:37 jsonpatch==1.32
23:12:37 jsonpath-rw==1.4.0
23:12:37 jsonpointer==2.3
23:12:37 lxml==5.1.0
23:12:37 netaddr==0.8.0
23:12:37 netifaces==0.11.0
23:12:37 odltools==0.1.28
23:12:37 paramiko==3.4.0
23:12:37 pkg_resources==0.0.0
23:12:37 ply==3.11
23:12:37 pyang==2.6.0
23:12:37 pyangbind==0.8.1
23:12:37 pycparser==2.21
23:12:37 pyhocon==0.3.60
23:12:37 PyNaCl==1.5.0
23:12:37 pyparsing==3.1.1
23:12:37 python-dateutil==2.8.2
23:12:37 regex==2023.8.8
23:12:37 requests==2.27.1
23:12:37 robotframework==6.1.1
23:12:37 robotframework-httplibrary==0.4.2
23:12:37 robotframework-pythonlibcore==3.0.0
23:12:37 robotframework-requests==0.9.4
23:12:37 robotframework-selenium2library==3.0.0
23:12:37 robotframework-seleniumlibrary==5.1.3
23:12:37 robotframework-sshlibrary==3.8.0
23:12:37 scapy==2.5.0
23:12:37 scp==0.14.5
23:12:37 selenium==3.141.0
23:12:37 six==1.16.0
23:12:37 soupsieve==2.3.2.post1
23:12:37 urllib3==1.26.18
23:12:37 waitress==2.0.0
23:12:37 WebOb==1.8.7
23:12:37 WebTest==3.0.0
23:12:37 zipp==3.6.0
23:12:37 ++ mkdir -p /tmp/tmp.W4wZtNCRb3/src/onap
23:12:37 ++ rm -rf /tmp/tmp.W4wZtNCRb3/src/onap/testsuite
23:12:37 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
23:12:42 ++ echo 'Installing python confluent-kafka library'
23:12:42 Installing python confluent-kafka library
23:12:42 ++ python3 -m pip install -qq confluent-kafka
23:12:44 ++ echo 'Uninstall docker-py and reinstall docker.'
23:12:44 Uninstall docker-py and reinstall docker.
23:12:44 ++ python3 -m pip uninstall -y -qq docker
23:12:44 ++ python3 -m pip install -U -qq docker
23:12:45 ++ python3 -m pip -qq freeze
23:12:46 bcrypt==4.0.1
23:12:46 beautifulsoup4==4.12.3
23:12:46 bitarray==2.9.2
23:12:46 certifi==2024.2.2
23:12:46 cffi==1.15.1
23:12:46 charset-normalizer==2.0.12
23:12:46 confluent-kafka==2.3.0
23:12:46 cryptography==40.0.2
23:12:46 decorator==5.1.1
23:12:46 deepdiff==5.7.0
23:12:46 dnspython==2.2.1
23:12:46 docker==5.0.3
23:12:46 elasticsearch==7.17.9
23:12:46 elasticsearch-dsl==7.4.1
23:12:46 enum34==1.1.10
23:12:46 future==1.0.0
23:12:46 idna==3.6
23:12:46 importlib-resources==5.4.0
23:12:46 ipaddr==2.2.0
23:12:46 isodate==0.6.1
23:12:46 Jinja2==3.0.3
23:12:46 jmespath==0.10.0
23:12:46 jsonpatch==1.32
23:12:46 jsonpath-rw==1.4.0
23:12:46 jsonpointer==2.3
23:12:46 kafka-python==2.0.2
23:12:46 lxml==5.1.0
23:12:46 MarkupSafe==2.0.1
23:12:46 more-itertools==5.0.0
23:12:46 netaddr==0.8.0
23:12:46 netifaces==0.11.0
23:12:46 odltools==0.1.28
23:12:46 ordered-set==4.0.2
23:12:46 paramiko==3.4.0
23:12:46 pbr==6.0.0
23:12:46 pkg_resources==0.0.0
23:12:46 ply==3.11
23:12:46 protobuf==3.19.6
23:12:46 pyang==2.6.0
23:12:46 pyangbind==0.8.1
23:12:46 pycparser==2.21
23:12:46 pyhocon==0.3.60
23:12:46 PyNaCl==1.5.0
23:12:46 pyparsing==3.1.1
23:12:46 python-dateutil==2.8.2
23:12:46 PyYAML==6.0.1
23:12:46 regex==2023.8.8
23:12:46 requests==2.27.1
23:12:46 robotframework==6.1.1
23:12:46 robotframework-httplibrary==0.4.2
23:12:46 robotframework-onap==0.6.0.dev105
23:12:46 robotframework-pythonlibcore==3.0.0
23:12:46 robotframework-requests==0.9.4
23:12:46 robotframework-selenium2library==3.0.0
23:12:46 robotframework-seleniumlibrary==5.1.3
23:12:46 robotframework-sshlibrary==3.8.0
23:12:46 robotlibcore-temp==1.0.2
23:12:46 scapy==2.5.0
23:12:46 scp==0.14.5
23:12:46 selenium==3.141.0
23:12:46 six==1.16.0
23:12:46 soupsieve==2.3.2.post1
23:12:46 urllib3==1.26.18
23:12:46 waitress==2.0.0
23:12:46 WebOb==1.8.7
23:12:46 websocket-client==1.3.1
23:12:46 WebTest==3.0.0
23:12:46 zipp==3.6.0
23:12:46 ++ uname
23:12:46 ++ grep -q Linux
23:12:46 ++ sudo apt-get -y -qq install libxml2-utils
23:12:46 + load_set
23:12:46 + _setopts=ehuxB
23:12:46 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
23:12:46 ++ tr : ' '
23:12:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:46 + set +o braceexpand
23:12:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:46 + set +o hashall
23:12:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:46 + set +o interactive-comments
23:12:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:46 + set +o nounset
23:12:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:46 + set +o xtrace
23:12:46 ++ echo ehuxB
23:12:46 ++ sed 's/./& /g'
23:12:46 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:46 + set +e
23:12:46 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:46 + set +h
23:12:46 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:46 + set +u
23:12:46 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:46 + set +x
23:12:46 + source_safely /tmp/tmp.W4wZtNCRb3/bin/activate
23:12:46 + '[' -z /tmp/tmp.W4wZtNCRb3/bin/activate ']'
23:12:46 + relax_set
23:12:46 + set +e
23:12:46 + set +o pipefail
23:12:46 + . /tmp/tmp.W4wZtNCRb3/bin/activate
23:12:46 ++ deactivate nondestructive
23:12:46 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
23:12:46 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:46 ++ export PATH
23:12:46 ++ unset _OLD_VIRTUAL_PATH
23:12:46 ++ '[' -n '' ']'
23:12:46 ++ '[' -n /bin/bash -o -n '' ']'
23:12:46 ++ hash -r
23:12:46 ++ '[' -n '' ']'
23:12:46 ++ unset VIRTUAL_ENV
23:12:46 ++ '[' '!' nondestructive = nondestructive ']'
23:12:46 ++ VIRTUAL_ENV=/tmp/tmp.W4wZtNCRb3
23:12:46 ++ export VIRTUAL_ENV
23:12:46 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:46 ++ PATH=/tmp/tmp.W4wZtNCRb3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:46 ++ export PATH
23:12:46 ++ '[' -n '' ']'
23:12:46 ++ '[' -z '' ']'
23:12:46 ++ _OLD_VIRTUAL_PS1='(tmp.W4wZtNCRb3) '
23:12:46 ++ '[' 'x(tmp.W4wZtNCRb3) ' '!=' x ']'
23:12:46 ++ PS1='(tmp.W4wZtNCRb3) (tmp.W4wZtNCRb3) '
23:12:46 ++ export PS1
23:12:46 ++ '[' -n /bin/bash -o -n '' ']'
23:12:46 ++ hash -r
23:12:46 + load_set
23:12:46 + _setopts=hxB
23:12:46 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:12:46 ++ tr : ' '
23:12:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:46 + set +o braceexpand
23:12:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:46 + set +o hashall
23:12:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:46 + set +o interactive-comments
23:12:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:46 + set +o xtrace
23:12:46 ++ echo hxB
23:12:46 ++ sed 's/./& /g'
23:12:46 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:46 + set +h
23:12:46 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:46 + set +x
23:12:46 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:46 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:46 + export TEST_OPTIONS=
23:12:46 + TEST_OPTIONS=
23:12:46 ++ mktemp -d
23:12:46 + WORKDIR=/tmp/tmp.yQgLqrqYzc
23:12:46 + cd /tmp/tmp.yQgLqrqYzc
23:12:46 + docker login -u docker -p docker nexus3.onap.org:10001
23:12:46 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
23:12:46 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
23:12:46 Configure a credential helper to remove this warning. See
23:12:46 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
23:12:46
23:12:46 Login Succeeded
23:12:46 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:46 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:46 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
23:12:46 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:46 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:46 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:46 + relax_set
23:12:46 + set +e
23:12:46 + set +o pipefail
23:12:46 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:46 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
23:12:46 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:46 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
23:12:46 +++ GERRIT_BRANCH=master
23:12:46 +++ echo GERRIT_BRANCH=master
23:12:46 GERRIT_BRANCH=master
23:12:46 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:12:46 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
23:12:46 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
23:12:46 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
23:12:47 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:47 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:47 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:47 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:47 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:47 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:47 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
23:12:47 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:47 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:12:47 +++ grafana=false
23:12:47 +++ gui=false
23:12:47 +++ [[ 2 -gt 0 ]]
23:12:47 +++ key=apex-pdp
23:12:47 +++ case $key in
23:12:47 +++ echo apex-pdp
23:12:47 apex-pdp
23:12:47 +++ component=apex-pdp
23:12:47 +++ shift
23:12:47 +++ [[ 1 -gt 0 ]]
23:12:47 +++ key=--grafana
23:12:47 +++ case $key in
23:12:47 +++ grafana=true
23:12:47 +++ shift
23:12:47 +++ [[ 0 -gt 0 ]]
23:12:47 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:12:47 +++ echo 'Configuring docker compose...'
23:12:47 Configuring docker compose...
23:12:47 +++ source export-ports.sh
23:12:47 +++ source get-versions.sh
23:12:49 +++ '[' -z pap ']'
23:12:49 +++ '[' -n apex-pdp ']'
23:12:49 +++ '[' apex-pdp == logs ']'
23:12:49 +++ '[' true = true ']'
23:12:49 +++ echo 'Starting apex-pdp application with Grafana'
23:12:49 Starting apex-pdp application with Grafana
23:12:49 +++ docker-compose up -d apex-pdp grafana
23:12:50 Creating network "compose_default" with the default driver
23:12:50 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
23:12:50 latest: Pulling from prom/prometheus
23:12:53 Digest: sha256:bc1794e85c9e00293351b967efa267ce6af1c824ac875a9d0c7ac84700a8b53e
23:12:53 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
23:12:53 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
23:12:53 latest: Pulling from grafana/grafana
23:12:58 Digest: sha256:8640e5038e83ca4554ed56b9d76375158bcd51580238c6f5d8adaf3f20dd5379
23:12:58 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
23:12:58 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
23:12:58 10.10.2: Pulling from mariadb
23:13:03 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
23:13:03 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
23:13:03 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
23:13:03 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
23:13:07 Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13
23:13:07 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
23:13:07 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
23:13:08 latest: Pulling from confluentinc/cp-zookeeper
23:13:19 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866
23:13:19 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
23:13:19 Pulling kafka (confluentinc/cp-kafka:latest)...
23:13:21 latest: Pulling from confluentinc/cp-kafka
23:13:24 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047
23:13:24 Status: Downloaded newer image for confluentinc/cp-kafka:latest
23:13:24 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
23:13:24 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
23:13:31 Digest: sha256:ed573692302e5a28aa3b51a60adbd7641290e273719edd44bc9ff784d1569efa
23:13:32 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
23:13:32 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
23:13:35 3.1.2-SNAPSHOT: Pulling from onap/policy-api 23:13:37 Digest: sha256:71cc3c3555fddbd324c5ddec27e24db340b82732d2f6ce50eddcfdf6715a7ab2 23:13:37 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 23:13:37 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 23:13:37 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 23:13:39 Digest: sha256:448850bc9066413f6555e9c62d97da12eaa2c454a1304262987462aae46f4676 23:13:39 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 23:13:39 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 23:13:39 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 23:13:49 Digest: sha256:8670bcaff746ebc196cef9125561eb167e1e65c7e2f8d374c0d8834d57564da4 23:13:49 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 23:13:49 Creating compose_zookeeper_1 ... 23:13:49 Creating prometheus ... 23:13:49 Creating mariadb ... 23:13:49 Creating simulator ... 23:14:01 Creating prometheus ... done 23:14:01 Creating grafana ... 23:14:02 Creating grafana ... done 23:14:03 Creating mariadb ... done 23:14:03 Creating policy-db-migrator ... 23:14:04 Creating simulator ... done 23:14:05 Creating compose_zookeeper_1 ... done 23:14:05 Creating kafka ... 23:14:06 Creating policy-db-migrator ... done 23:14:06 Creating policy-api ... 23:14:07 Creating policy-api ... done 23:14:08 Creating kafka ... done 23:14:08 Creating policy-pap ... 23:14:09 Creating policy-pap ... done 23:14:09 Creating policy-apex-pdp ... 23:14:10 Creating policy-apex-pdp ... 
done
23:14:10 +++ echo 'Prometheus server: http://localhost:30259'
23:14:10 Prometheus server: http://localhost:30259
23:14:10 +++ echo 'Grafana server: http://localhost:30269'
23:14:10 Grafana server: http://localhost:30269
23:14:10 +++ cd /w/workspace/policy-pap-master-project-csit-pap
23:14:10 ++ sleep 10
23:14:20 ++ unset http_proxy https_proxy
23:14:20 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
23:14:20 Waiting for REST to come up on localhost port 30003...
23:14:20 NAMES                 STATUS
23:14:20 policy-apex-pdp       Up 10 seconds
23:14:20 policy-pap            Up 11 seconds
23:14:20 policy-api            Up 13 seconds
23:14:20 kafka                 Up 12 seconds
23:14:20 grafana               Up 18 seconds
23:14:20 simulator             Up 16 seconds
23:14:20 mariadb               Up 17 seconds
23:14:20 compose_zookeeper_1   Up 15 seconds
23:14:20 prometheus            Up 19 seconds
23:14:25 NAMES                 STATUS
23:14:25 policy-apex-pdp       Up 15 seconds
23:14:25 policy-pap            Up 16 seconds
23:14:25 policy-api            Up 18 seconds
23:14:25 kafka                 Up 17 seconds
23:14:25 grafana               Up 23 seconds
23:14:25 simulator             Up 21 seconds
23:14:25 mariadb               Up 22 seconds
23:14:25 compose_zookeeper_1   Up 20 seconds
23:14:25 prometheus            Up 24 seconds
23:14:31 NAMES                 STATUS
23:14:31 policy-apex-pdp       Up 20 seconds
23:14:31 policy-pap            Up 21 seconds
23:14:31 policy-api            Up 23 seconds
23:14:31 kafka                 Up 22 seconds
23:14:31 grafana               Up 28 seconds
23:14:31 simulator             Up 26 seconds
23:14:31 mariadb               Up 27 seconds
23:14:31 compose_zookeeper_1   Up 25 seconds
23:14:31 prometheus            Up 29 seconds
23:14:36 NAMES                 STATUS
23:14:36 policy-apex-pdp       Up 25 seconds
23:14:36 policy-pap            Up 26 seconds
23:14:36 policy-api            Up 28 seconds
23:14:36 kafka                 Up 27 seconds
23:14:36 grafana               Up 33 seconds
23:14:36 simulator             Up 31 seconds
23:14:36 mariadb               Up 32 seconds
23:14:36 compose_zookeeper_1   Up 30 seconds
23:14:36 prometheus            Up 34 seconds
23:14:41 NAMES                 STATUS
23:14:41 policy-apex-pdp       Up 30 seconds
23:14:41 policy-pap            Up 31 seconds
23:14:41 policy-api            Up 33 seconds
23:14:41 kafka                 Up 32 seconds
23:14:41 grafana               Up 38 seconds
23:14:41 simulator             Up 36 seconds
23:14:41 mariadb               Up 37 seconds
23:14:41 compose_zookeeper_1   Up 35 seconds
23:14:41 prometheus            Up 39 seconds
23:14:46 NAMES                 STATUS
23:14:46 policy-apex-pdp       Up 35 seconds
23:14:46 policy-pap            Up 36 seconds
23:14:46 policy-api            Up 38 seconds
23:14:46 kafka                 Up 37 seconds
23:14:46 grafana               Up 43 seconds
23:14:46 simulator             Up 41 seconds
23:14:46 mariadb               Up 42 seconds
23:14:46 compose_zookeeper_1   Up 40 seconds
23:14:46 prometheus            Up 44 seconds
23:14:46 ++ export 'SUITES=pap-test.robot
23:14:46 pap-slas.robot'
23:14:46 ++ SUITES='pap-test.robot
23:14:46 pap-slas.robot'
23:14:46 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:46 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:46 + load_set
23:14:46 + _setopts=hxB
23:14:46 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:14:46 ++ tr : ' '
23:14:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:46 + set +o braceexpand
23:14:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:46 + set +o hashall
23:14:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:46 + set +o interactive-comments
23:14:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:46 + set +o xtrace
23:14:46 ++ echo hxB
23:14:46 ++ sed 's/./& /g'
23:14:46 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:46 + set +h
23:14:46 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:46 + set +x
23:14:46 + docker_stats
23:14:46 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
23:14:46 ++ uname -s
23:14:46 + '[' Linux == Darwin ']'
23:14:46 + sh -c 'top -bn1 | head -3'
23:14:46 top - 23:14:46 up 4 min,  0 users,  load average: 3.20, 1.44, 0.58
23:14:46 Tasks: 208 total,   1 running, 131 sleeping,   0 stopped,   0 zombie
23:14:46 %Cpu(s): 14.1 us,  3.0 sy,  0.0 ni, 78.6 id,  4.1 wa,  0.0 hi,  0.1 si,  0.1 st
23:14:46 + echo
23:14:46 
23:14:46 + sh -c 'free -h'
23:14:46               total        used        free      shared  buff/cache   available
23:14:46 Mem:            31G        2.7G         22G        1.3M        6.2G         28G
23:14:46 Swap:          1.0G          0B        1.0G
23:14:46 + echo
23:14:46 
23:14:46 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:14:46 NAMES                 STATUS
23:14:46 policy-apex-pdp       Up 35 seconds
23:14:46 policy-pap            Up 36 seconds
23:14:46 policy-api            Up 38 seconds
23:14:46 kafka                 Up 37 seconds
23:14:46 grafana               Up 44 seconds
23:14:46 simulator             Up 41 seconds
23:14:46 mariadb               Up 42 seconds
23:14:46 compose_zookeeper_1   Up 40 seconds
23:14:46 prometheus            Up 44 seconds
23:14:46 + echo
23:14:46 
23:14:46 + docker stats --no-stream
23:14:49 CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
23:14:49 e417f0e35287   policy-apex-pdp       2.10%   185.6MiB / 31.41GiB   0.58%   7.21kB / 6.97kB   0B / 0B         48
23:14:49 120b5bfa683b   policy-pap            6.74%   528.6MiB / 31.41GiB   1.64%   30.5kB / 32.8kB   0B / 153MB      61
23:14:49 d898c735c037   policy-api            0.11%   539.8MiB / 31.41GiB   1.68%   1MB / 737kB       0B / 0B         56
23:14:49 c7bee733818e   kafka                 8.50%   378.4MiB / 31.41GiB   1.18%   72.1kB / 74.7kB   0B / 508kB      84
23:14:49 861ec82cea04   grafana               0.03%   57.93MiB / 31.41GiB   0.18%   18.9kB / 3.44kB   0B / 24MB       21
23:14:49 1aa1a47f47a8   simulator             0.07%   125.3MiB / 31.41GiB   0.39%   1.31kB / 0B       0B / 0B         76
23:14:49 99656c7b467a   mariadb               0.02%   101.7MiB / 31.41GiB   0.32%   996kB / 1.19MB    11MB / 71.4MB   37
23:14:49 87f646d9b0d2   compose_zookeeper_1   0.18%   98.32MiB / 31.41GiB   0.31%   55.8kB / 49.4kB   0B / 332kB      60
23:14:49 7d01a6da3020   prometheus            0.00%   19.42MiB / 31.41GiB   0.06%   28.6kB / 1.09kB   131kB / 0B      12
23:14:49 + echo
23:14:49 
23:14:49 + cd /tmp/tmp.yQgLqrqYzc
23:14:49 + echo 'Reading the testplan:'
23:14:49 Reading the testplan:
23:14:49 + echo 'pap-test.robot
23:14:49 pap-slas.robot'
23:14:49 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
23:14:49 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
23:14:49 + cat testplan.txt
23:14:49 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
23:14:49 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:49 ++ xargs
23:14:49 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
23:14:49 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:49 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:49 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:49 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:14:49 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
23:14:49 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
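[Editor's note] The testplan expansion traced above (drop comment and blank lines, prefix each suite with the tests directory, collapse to a single line with `xargs`) can be reproduced as a standalone sketch; the file name `testplan.txt`, the suite names, and the directory path are taken from this run's output, and the intermediate file name `testplan.expanded` is an illustrative assumption:

```shell
#!/bin/bash
# Sketch of the testplan expansion echoed in the trace above.
# TESTS_DIR and the suite names come from this run's log.
TESTS_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests

printf '%s\n' 'pap-test.robot' 'pap-slas.robot' > testplan.txt

# Drop comment and blank lines, then prefix each remaining suite name
# with the tests directory, as the egrep | sed pipeline in the log does.
egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
  | sed "s|^|${TESTS_DIR}/|" > testplan.expanded

# xargs with no command echoes its stdin as one space-separated line,
# which is how the multi-line plan becomes the single SUITES string.
SUITES=$(xargs < testplan.expanded)
echo "$SUITES"
```

The resulting `SUITES` string is what the job then passes verbatim to `python3 -m robot.run`.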
23:14:49 + relax_set
23:14:49 + set +e
23:14:49 + set +o pipefail
23:14:49 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:49 ==============================================================================
23:14:49 pap
23:14:49 ==============================================================================
23:14:49 pap.Pap-Test
23:14:49 ==============================================================================
23:14:50 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
23:14:50 ------------------------------------------------------------------------------
23:14:50 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:14:50 ------------------------------------------------------------------------------
23:14:51 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:14:51 ------------------------------------------------------------------------------
23:14:51 Healthcheck :: Verify policy pap health check | PASS |
23:14:51 ------------------------------------------------------------------------------
23:15:12 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:15:12 ------------------------------------------------------------------------------
23:15:12 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:15:12 ------------------------------------------------------------------------------
23:15:12 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:15:12 ------------------------------------------------------------------------------
23:15:13 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:15:13 ------------------------------------------------------------------------------
23:15:13 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:15:13 ------------------------------------------------------------------------------
23:15:13 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:15:13 ------------------------------------------------------------------------------
23:15:13 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:15:13 ------------------------------------------------------------------------------
23:15:13 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:15:13 ------------------------------------------------------------------------------
23:15:14 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:15:14 ------------------------------------------------------------------------------
23:15:14 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:15:14 ------------------------------------------------------------------------------
23:15:14 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:15:14 ------------------------------------------------------------------------------
23:15:14 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:15:14 ------------------------------------------------------------------------------
23:15:15 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:15:15 ------------------------------------------------------------------------------
23:15:35 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:15:35 ------------------------------------------------------------------------------
23:15:35 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:35 ------------------------------------------------------------------------------
23:15:35 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:35 ------------------------------------------------------------------------------
23:15:35 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:35 ------------------------------------------------------------------------------
23:15:35 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:35 ------------------------------------------------------------------------------
23:15:35 pap.Pap-Test | PASS |
23:15:35 22 tests, 22 passed, 0 failed
23:15:35 ==============================================================================
23:15:35 pap.Pap-Slas
23:15:35 ==============================================================================
23:16:35 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:35 ------------------------------------------------------------------------------
23:16:36 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:36 ------------------------------------------------------------------------------
23:16:36 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:36 ------------------------------------------------------------------------------
23:16:36 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:36 ------------------------------------------------------------------------------
23:16:36 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:36 ------------------------------------------------------------------------------
23:16:36 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:36 ------------------------------------------------------------------------------
23:16:36 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:36 ------------------------------------------------------------------------------
23:16:36 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
23:16:36 ------------------------------------------------------------------------------
23:16:36 pap.Pap-Slas | PASS |
23:16:36 8 tests, 8 passed, 0 failed
23:16:36 ==============================================================================
23:16:36 pap | PASS |
23:16:36 30 tests, 30 passed, 0 failed
23:16:36 ==============================================================================
23:16:36 Output:  /tmp/tmp.yQgLqrqYzc/output.xml
23:16:36 Log:     /tmp/tmp.yQgLqrqYzc/log.html
23:16:36 Report:  /tmp/tmp.yQgLqrqYzc/report.html
23:16:36 + RESULT=0
23:16:36 + load_set
23:16:36 + _setopts=hxB
23:16:36 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:16:36 ++ tr : ' '
23:16:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:36 + set +o braceexpand
23:16:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:36 + set +o hashall
23:16:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:36 + set +o interactive-comments
23:16:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:36 + set +o xtrace
23:16:36 ++ echo hxB
23:16:36 ++ sed 's/./& /g'
23:16:36 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:36 + set +h
23:16:36 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:36 + set +x
23:16:36 + echo 'RESULT: 0'
23:16:36 RESULT: 0
23:16:36 + exit 0
23:16:36 + on_exit
23:16:36 + rc=0
23:16:36 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
23:16:36 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:36 NAMES                 STATUS
23:16:36 policy-apex-pdp       Up 2 minutes
23:16:36 policy-pap            Up 2 minutes
23:16:36 policy-api            Up 2 minutes
23:16:36 kafka                 Up 2 minutes
23:16:36 grafana               Up 2 minutes
23:16:36 simulator             Up 2 minutes
23:16:36 mariadb               Up 2 minutes
23:16:36 compose_zookeeper_1   Up 2 minutes
23:16:36 prometheus            Up 2 minutes
23:16:36 + docker_stats
23:16:36 ++ uname -s
23:16:36 + '[' Linux == Darwin ']'
23:16:36 + sh -c 'top -bn1 | head -3'
23:16:36 top - 23:16:36 up 6 min,  0 users,  load average: 0.80, 1.16, 0.57
23:16:36 Tasks: 197 total,   1 running, 129 sleeping,   0 stopped,   0 zombie
23:16:36 %Cpu(s): 11.2 us,  2.3 sy,  0.0 ni, 83.2 id,  3.2 wa,  0.0 hi,  0.1 si,  0.1 st
23:16:36 + echo
23:16:36 
23:16:36 + sh -c 'free -h'
23:16:36               total        used        free      shared  buff/cache   available
23:16:36 Mem:            31G        2.8G         22G        1.3M        6.2G         28G
23:16:36 Swap:          1.0G          0B        1.0G
23:16:36 + echo
23:16:36 
23:16:36 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:36 NAMES                 STATUS
23:16:36 policy-apex-pdp       Up 2 minutes
23:16:36 policy-pap            Up 2 minutes
23:16:36 policy-api            Up 2 minutes
23:16:36 kafka                 Up 2 minutes
23:16:36 grafana               Up 2 minutes
23:16:36 simulator             Up 2 minutes
23:16:36 mariadb               Up 2 minutes
23:16:36 compose_zookeeper_1   Up 2 minutes
23:16:36 prometheus            Up 2 minutes
23:16:36 + echo
23:16:36 
23:16:36 + docker stats --no-stream
23:16:39 CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
23:16:39 e417f0e35287   policy-apex-pdp       1.87%   190.1MiB / 31.41GiB   0.59%   56.7kB / 91.4kB   0B / 0B         52
23:16:39 120b5bfa683b   policy-pap            0.73%   499.5MiB / 31.41GiB   1.55%   2.33MB / 774kB    0B / 153MB      65
23:16:39 d898c735c037   policy-api            0.11%   615.3MiB / 31.41GiB   1.91%   2.49MB / 1.26MB   0B / 0B         58
23:16:39 c7bee733818e   kafka                 7.73%   387.1MiB / 31.41GiB   1.20%   242kB / 217kB     0B / 606kB      85
23:16:39 861ec82cea04   grafana               0.08%   65.02MiB / 31.41GiB   0.20%   19.6kB / 4.39kB   0B / 24MB       21
23:16:39 1aa1a47f47a8   simulator             0.07%   125.4MiB / 31.41GiB   0.39%   1.58kB / 0B       0B / 0B         78
23:16:39 99656c7b467a   mariadb               0.01%   103.1MiB / 31.41GiB   0.32%   1.95MB / 4.77MB   11MB / 71.7MB   28
23:16:39 87f646d9b0d2   compose_zookeeper_1   0.10%   99.64MiB / 31.41GiB   0.31%   58.7kB / 51kB     0B / 332kB      60
23:16:39 7d01a6da3020   prometheus            0.00%   25.52MiB / 31.41GiB   0.08%   139kB / 10.2kB    131kB / 0B      13
23:16:39 + echo
23:16:39 
23:16:39 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:39 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
23:16:39 + relax_set
23:16:39 + set +e
23:16:39 + set +o pipefail
23:16:39 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:39 ++ echo 'Shut down started!'
23:16:39 Shut down started!
23:16:39 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:39 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:16:39 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:16:39 ++ source export-ports.sh
23:16:39 ++ source get-versions.sh
23:16:40 ++ echo 'Collecting logs from docker compose containers...'
23:16:40 Collecting logs from docker compose containers...
23:16:40 ++ docker-compose logs
23:16:42 ++ cat docker_compose.log
23:16:42 Attaching to policy-apex-pdp, policy-pap, policy-api, kafka, policy-db-migrator, grafana, simulator, mariadb, compose_zookeeper_1, prometheus
23:16:42 zookeeper_1 | ===> User
23:16:42 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
23:16:42 zookeeper_1 | ===> Configuring ...
23:16:42 zookeeper_1 | ===> Running preflight checks ...
23:16:42 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
23:16:42 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
23:16:42 zookeeper_1 | ===> Launching ...
23:16:42 zookeeper_1 | ===> Launching zookeeper ...
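[Editor's note] The `relax_set` and `load_set` helpers that bracket the Robot run and the teardown in this trace implement a loosen-then-restore pattern for shell options. A minimal reconstruction, inferred from the echoed xtrace output (the helper bodies are assumptions, not the project's exact source), looks like:

```shell
#!/bin/bash
# Reconstruction of the relax_set/load_set pattern visible in the trace.
# The bodies below are inferred from the echoed commands, not copied
# from the actual CSIT scripts.
_setopts=hxB   # single-letter flags captured at startup (hashall, xtrace, braceexpand)

relax_set() {
  # Loosen error handling around the Robot run so a failing suite can be
  # inspected (and its RESULT recorded) instead of aborting the job.
  set +e
  set +o pipefail
}

load_set() {
  # First clear every long option currently recorded in SHELLOPTS ...
  for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
    set +o "$i"
  done
  # ... then walk the saved flags one character at a time, exactly as the
  # `sed 's/./& /g'` loop in the trace does. Note the trace shows `set +h`
  # and `set +x`, i.e. the flags are turned off here; turning off xtrace
  # is also why the echoed trace stops right after `set +x`.
  for i in $(echo "$_setopts" | sed 's/./& /g'); do
    set "+$i"
  done
}
```

This explains the repeating `set +o braceexpand` / `set +o hashall` / ... / `set +h` / `set +x` blocks that appear after `load_set` throughout the log.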
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,361] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,368] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,368] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,368] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,368] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,369] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,370] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,370] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,370] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,371] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,371] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,372] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,372] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,372] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,372] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,372] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,383] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,386] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,386] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,388] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,398] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,398] INFO   ______                 _ (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,398] INFO  |___  /                | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,398] INFO     / /    ___    ___   | | __   ___    ___   _ __     ___   _ __ (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,398] INFO    / /    / _ \  / _ \  | |/ /  / _ \  / _ \ | '_ \   / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,398] INFO   / /__  | (_) | | (_) | |   <  |  __/ |  __/ | |_) | |  __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,398] INFO  /_____|  \___/   \___/  |_|\_\  \___|  \___| | .__/   \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,398] INFO                                               | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,398] INFO                                               |_| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,399] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:host.name=87f646d9b0d2 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,401] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,402] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,402] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,403] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,403] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,406] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,406] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,407] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,407] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,407] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,427] INFO Logging initialized @563ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,516] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,516] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,536] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,564] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,564] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,565] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,568] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,576] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,594] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,595] INFO Started @731ms (org.eclipse.jetty.server.Server)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,595] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,605] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,607] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,610] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,613] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,631] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,632] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,634] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,634] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,642] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,642] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:42 zookeeper_1 | [2024-02-29 23:14:09,646] INFO Snapshot loaded in 11 ms, highest
zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:16:42 zookeeper_1 | [2024-02-29 23:14:09,646] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:42 zookeeper_1 | [2024-02-29 23:14:09,647] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:42 zookeeper_1 | [2024-02-29 23:14:09,655] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:16:42 zookeeper_1 | [2024-02-29 23:14:09,655] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:16:42 zookeeper_1 | [2024-02-29 23:14:09,669] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:16:42 zookeeper_1 | [2024-02-29 23:14:09,670] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 23:16:42 zookeeper_1 | [2024-02-29 23:14:13,222] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.62379877Z level=info msg="Starting Grafana" version=10.3.3 commit=252761264e22ece57204b327f9130d3b44592c01 branch=HEAD compiled=2024-02-29T23:14:02Z 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624097953Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624109003Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624113633Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624119603Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:16:42 grafana | logger=settings 
t=2024-02-29T23:14:02.624123233Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624126383Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624163653Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624173113Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624177893Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624183073Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624187093Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624190763Z level=info msg=Target target=[all] 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624202783Z level=info msg="Path Home" path=/usr/share/grafana 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624206193Z level=info msg="Path Data" path=/var/lib/grafana 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624229734Z level=info msg="Path Logs" path=/var/log/grafana 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624242364Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624266264Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:16:42 grafana | logger=settings t=2024-02-29T23:14:02.624275244Z level=info msg="App mode production" 23:16:42 
grafana | logger=sqlstore t=2024-02-29T23:14:02.624656767Z level=info msg="Connecting to DB" dbtype=sqlite3 23:16:42 grafana | logger=sqlstore t=2024-02-29T23:14:02.624684917Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.625445434Z level=info msg="Starting DB migrations" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.626441102Z level=info msg="Executing migration" id="create migration_log table" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.627263689Z level=info msg="Migration successfully executed" id="create migration_log table" duration=821.867µs 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.633040457Z level=info msg="Executing migration" id="create user table" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.633570041Z level=info msg="Migration successfully executed" id="create user table" duration=529.424µs 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.640449028Z level=info msg="Executing migration" id="add unique index user.login" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.642132722Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.682874ms 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.647250514Z level=info msg="Executing migration" id="add unique index user.email" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.648586275Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.334571ms 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.65279222Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.653536426Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=743.956µs 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.659910409Z level=info msg="Executing migration" id="drop 
index UQE_user_email - v1" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.661076979Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.16644ms 23:16:42 mariadb | 2024-02-29 23:14:03+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:42 mariadb | 2024-02-29 23:14:03+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:42 mariadb | 2024-02-29 23:14:03+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:42 mariadb | 2024-02-29 23:14:03+00:00 [Note] [Entrypoint]: Initializing database files 23:16:42 mariadb | 2024-02-29 23:14:04 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:42 mariadb | 2024-02-29 23:14:04 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:42 mariadb | 2024-02-29 23:14:04 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:42 mariadb | 23:16:42 mariadb | 23:16:42 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:16:42 mariadb | To do so, start the server, then issue the following command: 23:16:42 mariadb | 23:16:42 mariadb | '/usr/bin/mysql_secure_installation' 23:16:42 mariadb | 23:16:42 mariadb | which will also give you the option of removing the test 23:16:42 mariadb | databases and anonymous user created by default. This is 23:16:42 mariadb | strongly recommended for production servers. 23:16:42 mariadb | 23:16:42 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:42 mariadb | 23:16:42 mariadb | Please report any problems at https://mariadb.org/jira 23:16:42 mariadb | 23:16:42 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
23:16:42 mariadb | 23:16:42 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:42 mariadb | https://mariadb.org/get-involved/ 23:16:42 mariadb | 23:16:42 mariadb | 2024-02-29 23:14:05+00:00 [Note] [Entrypoint]: Database files initialized 23:16:42 mariadb | 2024-02-29 23:14:05+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:42 mariadb | 2024-02-29 23:14:05+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:42 mariadb | 2024-02-29 23:14:05 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... 23:16:42 mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:42 mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: Number of transaction pools: 1 23:16:42 mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:42 mariadb | 2024-02-29 23:14:05 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:42 mariadb | 2024-02-29 23:14:05 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:42 mariadb | 2024-02-29 23:14:05 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:42 mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:42 mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:42 mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:42 mariadb | 2024-02-29 23:14:06 0 [Note] InnoDB: 128 rollback segments are active. 23:16:42 mariadb | 2024-02-29 23:14:06 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:42 mariadb | 2024-02-29 23:14:06 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
23:16:42 mariadb | 2024-02-29 23:14:06 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:42 mariadb | 2024-02-29 23:14:06 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:42 mariadb | 2024-02-29 23:14:06 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:42 mariadb | 2024-02-29 23:14:06 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:42 mariadb | 2024-02-29 23:14:06 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:16:42 mariadb | 2024-02-29 23:14:06 0 [Note] mariadbd: ready for connections. 23:16:42 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:42 mariadb | 2024-02-29 23:14:06+00:00 [Note] [Entrypoint]: Temporary server started. 23:16:42 mariadb | 2024-02-29 23:14:08+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:42 mariadb | 2024-02-29 23:14:08+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:42 mariadb | 23:16:42 mariadb | 2024-02-29 23:14:08+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:42 mariadb | 23:16:42 mariadb | 2024-02-29 23:14:08+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.664612728Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:16:42 policy-apex-pdp | Waiting for mariadb port 3306... 
23:16:42 mariadb | #!/bin/bash -xv 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.669536109Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.922341ms 23:16:42 policy-apex-pdp | mariadb (172.17.0.3:3306) open 23:16:42 kafka | ===> User 23:16:42 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.673672613Z level=info msg="Executing migration" id="create user table v2" 23:16:42 policy-apex-pdp | Waiting for kafka port 9092... 23:16:42 policy-api | Waiting for mariadb port 3306... 23:16:42 policy-db-migrator | Waiting for mariadb port 3306... 23:16:42 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:42 kafka | ===> Configuring ... 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.67452114Z level=info msg="Migration successfully executed" id="create user table v2" duration=847.747µs 23:16:42 policy-apex-pdp | kafka (172.17.0.8:9092) open 23:16:42 prometheus | ts=2024-02-29T23:14:01.584Z caller=main.go:564 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:16:42 policy-api | mariadb (172.17.0.3:3306) open 23:16:42 policy-pap | Waiting for mariadb port 3306... 23:16:42 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:16:42 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:42 mariadb | # 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.680347308Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:16:42 policy-apex-pdp | Waiting for pap port 6969... 23:16:42 prometheus | ts=2024-02-29T23:14:01.585Z caller=main.go:608 level=info msg="Starting Prometheus Server" mode=server version="(version=2.50.1, branch=HEAD, revision=8c9b0285360a0b6288d76214a75ce3025bce4050)" 23:16:42 policy-api | Waiting for policy-db-migrator port 6824... 
23:16:42 policy-pap | mariadb (172.17.0.3:3306) open 23:16:42 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:16:42 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:42 kafka | Running in Zookeeper mode... 23:16:42 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.681136415Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=791.027µs 23:16:42 policy-apex-pdp | pap (172.17.0.10:6969) open 23:16:42 prometheus | ts=2024-02-29T23:14:01.585Z caller=main.go:613 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@6213bb3ee580, date=20240226-11:36:26, tags=netgo,builtinassets,stringlabels)" 23:16:42 policy-api | policy-db-migrator (172.17.0.7:6824) open 23:16:42 policy-pap | Waiting for kafka port 9092... 23:16:42 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:16:42 simulator | overriding logback.xml 23:16:42 kafka | ===> Running preflight checks ... 23:16:42 mariadb | # you may not use this file except in compliance with the License. 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.684655684Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.685444551Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=788.737µs 23:16:42 prometheus | ts=2024-02-29T23:14:01.585Z caller=main.go:614 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:42 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:42 policy-pap | kafka (172.17.0.8:9092) open 23:16:42 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:16:42 simulator | 2024-02-29 23:14:05,300 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:42 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:16:42 mariadb | # You may obtain a copy of the License at 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.688728308Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.689181731Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=453.173µs 23:16:42 prometheus | ts=2024-02-29T23:14:01.585Z caller=main.go:615 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:42 policy-api | 23:16:42 policy-pap | Waiting for api port 6969... 23:16:42 policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! 23:16:42 simulator | 2024-02-29 23:14:05,364 INFO org.onap.policy.models.simulators starting 23:16:42 kafka | ===> Check if Zookeeper is healthy ... 
23:16:42 mariadb | # 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.695540134Z level=info msg="Executing migration" id="Drop old table user_v1" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.696543992Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.001828ms 23:16:42 prometheus | ts=2024-02-29T23:14:01.585Z caller=main.go:616 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:42 policy-api | . ____ _ __ _ _ 23:16:42 policy-pap | api (172.17.0.9:6969) open 23:16:42 policy-db-migrator | 321 blocks 23:16:42 simulator | 2024-02-29 23:14:05,364 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:16:42 kafka | SLF4J: Class path contains multiple SLF4J bindings. 23:16:42 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.700678657Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.702702484Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.987876ms 23:16:42 prometheus | ts=2024-02-29T23:14:01.587Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:42 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:42 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:42 policy-db-migrator | Preparing upgrade release version: 0800 23:16:42 simulator | 2024-02-29 23:14:05,602 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:16:42 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:42 mariadb | # 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.706477155Z level=info msg="Executing migration" id="Update user table charset" 23:16:42 
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:16:42 prometheus | ts=2024-02-29T23:14:01.588Z caller=main.go:1118 level=info msg="Starting TSDB ..." 23:16:42 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:42 simulator | 2024-02-29 23:14:05,603 INFO org.onap.policy.models.simulators starting A&AI simulator 23:16:42 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:42 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.706499095Z level=info msg="Migration successfully executed" id="Update user table charset" duration=22.42µs 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.709613601Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:16:42 prometheus | ts=2024-02-29T23:14:01.594Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 23:16:42 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:42 policy-db-migrator | Preparing upgrade release version: 0900 23:16:42 policy-api | \\/ ___)| |_)| | | | | || (_| | ) 
) ) ) 23:16:42 simulator | 2024-02-29 23:14:05,765 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:42 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 23:16:42 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:42 policy-apex-pdp | [2024-02-29T23:14:44.788+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.710436458Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=822.887µs 23:16:42 prometheus | ts=2024-02-29T23:14:01.594Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 23:16:42 policy-pap | 23:16:42 policy-db-migrator | Preparing upgrade release version: 1000 23:16:42 simulator | 2024-02-29 23:14:05,776 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:42 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 23:16:42 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:42 policy-apex-pdp | [2024-02-29T23:14:44.999+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.716299226Z level=info msg="Executing migration" id="Add missing user data" 23:16:42 prometheus | ts=2024-02-29T23:14:01.596Z caller=head.go:610 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:16:42 policy-pap | . 
____ _ __ _ _ 23:16:42 policy-db-migrator | Preparing upgrade release version: 1100 23:16:42 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:42 simulator | 2024-02-29 23:14:05,778 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:42 kafka | [2024-02-29 23:14:13,156] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:42 mariadb | # See the License for the specific language governing permissions and 23:16:42 policy-apex-pdp | allow.auto.create.topics = true 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.71676875Z level=info msg="Migration successfully executed" id="Add missing user data" duration=468.944µs 23:16:42 prometheus | ts=2024-02-29T23:14:01.596Z caller=head.go:692 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.03µs 23:16:42 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:42 policy-db-migrator | Preparing upgrade release version: 1200 23:16:42 policy-api | =========|_|==============|___/=/_/_/_/ 23:16:42 simulator | 2024-02-29 23:14:05,784 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 
922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:host.name=c7bee733818e (org.apache.zookeeper.ZooKeeper) 23:16:42 mariadb | # limitations under the License. 23:16:42 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.720799543Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:42 prometheus | ts=2024-02-29T23:14:01.596Z caller=head.go:700 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:42 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:42 policy-db-migrator | Preparing upgrade release version: 1300 23:16:42 policy-api | :: Spring Boot :: (v3.1.8) 23:16:42 simulator | 2024-02-29 23:14:05,848 INFO Session workerName=node0 23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:42 mariadb | 23:16:42 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:42 policy-apex-pdp | auto.offset.reset = latest 23:16:42 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.723530196Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=2.730103ms 23:16:42 policy-db-migrator | Done 23:16:42 policy-api | 23:16:42 simulator | 2024-02-29 23:14:06,410 INFO Using GSON for REST calls 23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 23:16:42 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:42 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:42 policy-apex-pdp | check.crcs = true 23:16:42 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.72768539Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:42 policy-db-migrator | name version 23:16:42 policy-api | [2024-02-29T23:14:19.017+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 20 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:42 simulator | 2024-02-29 23:14:06,497 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE} 23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:42 mariadb | do 23:16:42 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:42 policy-apex-pdp | client.id = consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-1 23:16:42 policy-pap | =========|_|==============|___/=/_/_/_/ 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.729009171Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.323851ms 23:16:42 policy-db-migrator | policyadmin 0 23:16:42 policy-api | [2024-02-29T23:14:19.020+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:42 simulator | 2024-02-29 23:14:06,506 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 23:16:42 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client 
environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/re
flections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/
kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java
/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-
new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | client.rack = 
23:16:42 policy-apex-pdp | connections.max.idle.ms = 540000
23:16:42 policy-pap | :: Spring Boot :: (v3.1.8)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.73246382Z level=info msg="Executing migration" id="Add is_service_account column to user"
23:16:42 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
23:16:42 policy-api | [2024-02-29T23:14:20.927+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 
23:16:42 simulator | 2024-02-29 23:14:06,513 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1785ms
23:16:42 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | default.api.timeout.ms = 60000
23:16:42 policy-apex-pdp | enable.auto.commit = true
23:16:42 policy-pap | 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.73366845Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.20366ms
23:16:42 policy-db-migrator | upgrade: 0 -> 1300
23:16:42 policy-api | [2024-02-29T23:14:21.029+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 91 ms. Found 6 JPA repository interfaces.
23:16:42 simulator | 2024-02-29 23:14:06,514 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4264 ms. 
23:16:42 mariadb | done
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | exclude.internal.topics = true
23:16:42 policy-apex-pdp | fetch.max.bytes = 52428800
23:16:42 policy-pap | [2024-02-29T23:14:32.932+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 33 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.739812641Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
23:16:42 policy-db-migrator | 
23:16:42 policy-api | [2024-02-29T23:14:21.493+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
23:16:42 simulator | 2024-02-29 23:14:06,523 INFO org.onap.policy.models.simulators starting SDNC simulator
23:16:42 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | fetch.max.wait.ms = 500
23:16:42 policy-apex-pdp | fetch.min.bytes = 1
23:16:42 policy-pap | [2024-02-29T23:14:32.935+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.752907499Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=13.095888ms
23:16:42 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
23:16:42 policy-api | [2024-02-29T23:14:21.494+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
23:16:42 simulator | 2024-02-29 23:14:06,525 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:42 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | group.id = 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f
23:16:42 policy-apex-pdp | group.instance.id = null
23:16:42 policy-pap | [2024-02-29T23:14:35.018+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.756724701Z level=info msg="Executing migration" id="create temp user table v1-7"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-api | [2024-02-29T23:14:22.267+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
23:16:42 simulator | 2024-02-29 23:14:06,526 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:42 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | heartbeat.interval.ms = 3000
23:16:42 policy-apex-pdp | interceptor.classes = []
23:16:42 policy-pap | [2024-02-29T23:14:35.155+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 125 ms. Found 7 JPA repository interfaces. 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.757401536Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=676.415µs
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:16:42 policy-api | [2024-02-29T23:14:22.279+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
23:16:42 simulator | 2024-02-29 23:14:06,526 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:42 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | internal.leave.group.on.close = true
23:16:42 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:42 policy-pap | [2024-02-29T23:14:35.611+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.760799605Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-api | [2024-02-29T23:14:22.282+00:00|INFO|StandardService|main] Starting service [Tomcat]
23:16:42 simulator | 2024-02-29 23:14:06,528 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:42 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | isolation.level = read_uncommitted
23:16:42 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:42 policy-pap | [2024-02-29T23:14:35.612+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.761629862Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=829.627µs
23:16:42 policy-db-migrator | 
23:16:42 policy-api | [2024-02-29T23:14:22.282+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
23:16:42 simulator | 2024-02-29 23:14:06,541 INFO Session workerName=node0
23:16:42 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | max.partition.fetch.bytes = 1048576
23:16:42 policy-apex-pdp | max.poll.interval.ms = 300000
23:16:42 policy-pap | [2024-02-29T23:14:36.402+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.767328889Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
23:16:42 policy-db-migrator | 
23:16:42 policy-api | [2024-02-29T23:14:22.424+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
23:16:42 simulator | 2024-02-29 23:14:06,607 INFO Using GSON for REST calls
23:16:42 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | max.poll.records = 500
23:16:42 policy-apex-pdp | metadata.max.age.ms = 300000
23:16:42 policy-pap | [2024-02-29T23:14:36.412+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.768116935Z level=info msg="Migration 
successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=787.366µs
23:16:42 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:42 policy-api | [2024-02-29T23:14:22.424+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3316 ms
23:16:42 simulator | 2024-02-29 23:14:06,620 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}
23:16:42 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | metric.reporters = []
23:16:42 policy-apex-pdp | metrics.num.samples = 2
23:16:42 policy-pap | [2024-02-29T23:14:36.415+00:00|INFO|StandardService|main] Starting service [Tomcat]
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.772427581Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-api | [2024-02-29T23:14:22.923+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
23:16:42 simulator | 2024-02-29 23:14:06,621 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
23:16:42 simulator | 2024-02-29 23:14:06,622 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1893ms
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | metrics.recording.level = INFO
23:16:42 policy-apex-pdp | metrics.sample.window.ms = 30000
23:16:42 policy-pap | [2024-02-29T23:14:36.415+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.773226717Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=798.596µs
23:16:42 policy-db-migrator | 
CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
23:16:42 policy-api | [2024-02-29T23:14:23.027+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
23:16:42 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:42 kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
23:16:42 simulator | 2024-02-29 23:14:06,622 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4904 ms. 
23:16:42 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:42 prometheus | ts=2024-02-29T23:14:01.596Z caller=head.go:771 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
23:16:42 policy-pap | [2024-02-29T23:14:36.515+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.776930348Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-api | [2024-02-29T23:14:23.031+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
23:16:42 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:42 kafka | [2024-02-29 23:14:13,160] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper)
23:16:42 simulator | 2024-02-29 23:14:06,623 INFO org.onap.policy.models.simulators starting SO simulator
23:16:42 policy-apex-pdp | receive.buffer.bytes = 65536
23:16:42 policy-apex-pdp | reconnect.backoff.max.ms = 1000
23:16:42 policy-pap | [2024-02-29T23:14:36.516+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3490 ms
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.777755325Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=824.097µs
23:16:42 policy-db-migrator | 
23:16:42 policy-api | [2024-02-29T23:14:23.085+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
23:16:42 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
23:16:42 kafka | [2024-02-29 23:14:13,164] INFO Setting -D 
jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:16:42 simulator | 2024-02-29 23:14:06,627 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:42 policy-apex-pdp | reconnect.backoff.ms = 50
23:16:42 policy-apex-pdp | request.timeout.ms = 30000
23:16:42 policy-pap | [2024-02-29T23:14:37.012+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.783336561Z level=info msg="Executing migration" id="Update temp_user table charset"
23:16:42 policy-db-migrator | 
23:16:42 policy-api | [2024-02-29T23:14:23.466+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
23:16:42 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:42 kafka | [2024-02-29 23:14:13,169] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
23:16:42 simulator | 2024-02-29 23:14:06,627 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:42 policy-apex-pdp | retry.backoff.ms = 100
23:16:42 prometheus | ts=2024-02-29T23:14:01.596Z caller=head.go:808 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=81.521µs wal_replay_duration=518.107µs wbl_replay_duration=320ns total_replay_duration=633.119µs
23:16:42 policy-pap | [2024-02-29T23:14:37.106+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.783385362Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=49.161µs
23:16:42 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
23:16:42 policy-api | [2024-02-29T23:14:23.488+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
23:16:42 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:42 kafka | [2024-02-29 23:14:13,176] INFO zookeeper.request.timeout value is 0. 
feature enabled=false (org.apache.zookeeper.ClientCnxn)
23:16:42 simulator | 2024-02-29 23:14:06,632 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:42 policy-apex-pdp | sasl.client.callback.handler.class = null
23:16:42 prometheus | ts=2024-02-29T23:14:01.599Z caller=main.go:1139 level=info fs_type=EXT4_SUPER_MAGIC
23:16:42 policy-pap | [2024-02-29T23:14:37.110+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.786109844Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-api | [2024-02-29T23:14:23.598+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717
23:16:42 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
23:16:42 kafka | [2024-02-29 23:14:13,191] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. 
(org.apache.zookeeper.ClientCnxn) 23:16:42 simulator | 2024-02-29 23:14:06,633 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:42 policy-apex-pdp | sasl.jaas.config = null 23:16:42 prometheus | ts=2024-02-29T23:14:01.599Z caller=main.go:1142 level=info msg="TSDB started" 23:16:42 policy-pap | [2024-02-29T23:14:37.172+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.787713338Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.609424ms 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:42 policy-api | [2024-02-29T23:14:23.601+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
23:16:42 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:42 kafka | [2024-02-29 23:14:13,192] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
23:16:42 simulator | 2024-02-29 23:14:06,635 INFO Session workerName=node0
23:16:42 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:42 prometheus | ts=2024-02-29T23:14:01.599Z caller=main.go:1324 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
23:16:42 policy-pap | [2024-02-29T23:14:37.606+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.79156876Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-api | [2024-02-29T23:14:25.687+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
23:16:42 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
23:16:42 kafka | [2024-02-29 23:14:13,204] INFO Socket connection established, initiating session, client: /172.17.0.8:33806, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
23:16:42 simulator | 2024-02-29 23:14:06,718 INFO Using GSON for REST calls
23:16:42 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
23:16:42 prometheus | ts=2024-02-29T23:14:01.600Z caller=main.go:1361 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.116515ms db_storage=1.79µs remote_storage=2.53µs web_handler=720ns query_engine=1.83µs scrape=284.334µs scrape_sd=148.342µs notify=33.271µs notify_sd=15.79µs rules=2.52µs tracing=7.05µs
23:16:42 policy-pap | [2024-02-29T23:14:37.632+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.792740449Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.18067ms
23:16:42 policy-db-migrator |
23:16:42 policy-api | [2024-02-29T23:14:25.692+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
23:16:42 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
23:16:42 kafka | [2024-02-29 23:14:13,239] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000396720000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
23:16:42 simulator | 2024-02-29 23:14:06,731 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}
23:16:42 policy-apex-pdp | sasl.kerberos.service.name = null
23:16:42 prometheus | ts=2024-02-29T23:14:01.601Z caller=main.go:1103 level=info msg="Server is ready to receive web requests."
23:16:42 policy-pap | [2024-02-29T23:14:37.765+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7b6e5c12
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.798578128Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
23:16:42 policy-db-migrator |
23:16:42 policy-api | [2024-02-29T23:14:26.867+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
23:16:42 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
23:16:42 kafka | [2024-02-29 23:14:13,362] INFO Session: 0x100000396720000 closed (org.apache.zookeeper.ZooKeeper)
23:16:42 simulator | 2024-02-29 23:14:06,733 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
23:16:42 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:42 prometheus | ts=2024-02-29T23:14:01.601Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
23:16:42 policy-pap | [2024-02-29T23:14:37.767+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.799294594Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=716.276µs
23:16:42 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
23:16:42 policy-api | [2024-02-29T23:14:27.720+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
23:16:42 mariadb |
23:16:42 kafka | [2024-02-29 23:14:13,362] INFO EventThread shut down for session: 0x100000396720000 (org.apache.zookeeper.ClientCnxn)
23:16:42 simulator | 2024-02-29 23:14:06,733 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @2004ms
23:16:42 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:42 policy-pap | [2024-02-29T23:14:40.002+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.804061423Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-api | [2024-02-29T23:14:28.941+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
23:16:42 kafka | Using log4j config /etc/kafka/log4j.properties
23:16:42 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
23:16:42 simulator | 2024-02-29 23:14:06,734 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4898 ms.
23:16:42 policy-apex-pdp | sasl.login.callback.handler.class = null
23:16:42 policy-pap | [2024-02-29T23:14:40.006+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.805218303Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.167599ms
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:16:42 policy-api | [2024-02-29T23:14:29.150+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@607c7f58, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4bbb00a4, org.springframework.security.web.context.SecurityContextHolderFilter@6e11d059, org.springframework.security.web.header.HeaderWriterFilter@1d123972, org.springframework.security.web.authentication.logout.LogoutFilter@54e1e8a7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@206d4413, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@19bd1f98, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@69cf9acb, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@543d242e, org.springframework.security.web.access.ExceptionTranslationFilter@5b3063b7, org.springframework.security.web.access.intercept.AuthorizationFilter@407bfc49]
23:16:42 kafka | ===> Launching ...
23:16:42 simulator | 2024-02-29 23:14:06,736 INFO org.onap.policy.models.simulators starting VFC simulator
23:16:42 policy-apex-pdp | sasl.login.class = null
23:16:42 policy-pap | [2024-02-29T23:14:40.654+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.809494438Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-api | [2024-02-29T23:14:30.083+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
23:16:42 kafka | ===> Launching kafka ...
23:16:42 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
23:16:42 simulator | 2024-02-29 23:14:06,742 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:42 policy-apex-pdp | sasl.login.connect.timeout.ms = null
23:16:42 policy-pap | [2024-02-29T23:14:41.125+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.81335333Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.859702ms
23:16:42 policy-db-migrator |
23:16:42 policy-api | [2024-02-29T23:14:30.223+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
23:16:42 kafka | [2024-02-29 23:14:14,130] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
23:16:42 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
23:16:42 simulator | 2024-02-29 23:14:06,742 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:42 policy-apex-pdp | sasl.login.read.timeout.ms = null
23:16:42 policy-pap | [2024-02-29T23:14:41.258+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.819015917Z level=info msg="Executing migration" id="create temp_user v2"
23:16:42 policy-db-migrator |
23:16:42 policy-api | [2024-02-29T23:14:30.249+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
23:16:42 kafka | [2024-02-29 23:14:14,523] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:16:42 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
23:16:42 simulator | 2024-02-29 23:14:06,743 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:42 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
23:16:42 policy-pap | [2024-02-29T23:14:41.569+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.819899884Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=883.217µs
23:16:42 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
23:16:42 policy-api | [2024-02-29T23:14:30.268+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.073 seconds (process running for 12.692)
23:16:42 kafka | [2024-02-29 23:14:14,600] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
23:16:42 mariadb |
23:16:42 simulator | 2024-02-29 23:14:06,744 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:42 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
23:16:42 policy-pap | allow.auto.create.topics = true
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.823868947Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-api | [2024-02-29T23:14:39.928+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
23:16:42 kafka | [2024-02-29 23:14:14,601] INFO starting (kafka.server.KafkaServer)
23:16:42 mariadb | 2024-02-29 23:14:09+00:00 [Note] [Entrypoint]: Stopping temporary server
23:16:42 simulator | 2024-02-29 23:14:06,789 INFO Session workerName=node0
23:16:42 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
23:16:42 policy-pap | auto.commit.interval.ms = 5000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.824714224Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=844.857µs
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
23:16:42 policy-api | [2024-02-29T23:14:39.928+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
23:16:42 kafka | [2024-02-29 23:14:14,601] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
23:16:42 mariadb | 2024-02-29 23:14:09 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
23:16:42 simulator | 2024-02-29 23:14:06,847 INFO Using GSON for REST calls
23:16:42 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
23:16:42 policy-pap | auto.include.jmx.reporter = true
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.828769178Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-api | [2024-02-29T23:14:39.930+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
23:16:42 kafka | [2024-02-29 23:14:14,615] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
23:16:42 mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: FTS optimize thread exiting.
23:16:42 simulator | 2024-02-29 23:14:06,863 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}
23:16:42 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
23:16:42 policy-pap | auto.offset.reset = latest
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.829655005Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=885.007µs
23:16:42 policy-db-migrator |
23:16:42 policy-api | [2024-02-29T23:14:49.615+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers:
23:16:42 kafka | [2024-02-29 23:14:14,619] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
23:16:42 mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: Starting shutdown...
23:16:42 simulator | 2024-02-29 23:14:06,865 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
23:16:42 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
23:16:42 policy-pap | bootstrap.servers = [kafka:9092]
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.833659668Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
23:16:42 policy-db-migrator |
23:16:42 policy-api | []
23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:host.name=c7bee733818e (org.apache.zookeeper.ZooKeeper)
23:16:42 mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
23:16:42 simulator | 2024-02-29 23:14:06,865 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @2137ms
23:16:42 policy-apex-pdp | sasl.mechanism = GSSAPI
23:16:42 policy-pap | check.crcs = true
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.834515365Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=855.397µs
23:16:42 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
23:16:42 mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: Buffer pool(s) dump completed at 240229 23:14:09
23:16:42 simulator | 2024-02-29 23:14:06,866 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4877 ms.
23:16:42 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
23:16:42 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.83992002Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
23:16:42 mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
23:16:42 simulator | 2024-02-29 23:14:06,870 INFO org.onap.policy.models.simulators started
23:16:42 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
23:16:42 policy-pap | client.id = consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-1
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.840778157Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=857.767µs
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
23:16:42 mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: Shutdown completed; log sequence number 347334; transaction id 298
23:16:42 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
23:16:42 policy-pap | client.rack =
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.844524848Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
23:16:42 policy-db-migrator | --------------
23:16:42 mariadb | 2024-02-29 23:14:09 0 [Note] mariadbd: Shutdown complete
23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:42 policy-pap | connections.max.idle.ms = 540000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.845023132Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=497.834µs
23:16:42 policy-db-migrator |
23:16:42 mariadb |
23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:42 policy-pap | default.api.timeout.ms = 60000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.849092286Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
23:16:42 policy-db-migrator |
23:16:42 mariadb | 2024-02-29 23:14:09+00:00 [Note] [Entrypoint]: Temporary server stopped
23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:42 policy-pap | enable.auto.commit = true
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.850193145Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=1.100379ms
23:16:42 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
23:16:42 mariadb |
23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
23:16:42 policy-pap | exclude.internal.topics = true
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.85808995Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
23:16:42 policy-db-migrator | --------------
23:16:42 mariadb | 2024-02-29 23:14:09+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:42 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:42 policy-pap | fetch.max.bytes = 52428800 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.858579824Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=489.334µs 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:42 mariadb | 23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:42 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:42 policy-pap | fetch.max.wait.ms = 500 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.861845402Z level=info msg="Executing migration" id="create star table" 23:16:42 policy-db-migrator | -------------- 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:42 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:42 policy-pap | fetch.min.bytes = 1 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.86288132Z level=info msg="Migration successfully executed" id="create star table" duration=1.033798ms 23:16:42 policy-db-migrator | 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:42 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:42 policy-pap | group.id = ee5900cb-eee5-431a-a953-12f2e7174bf4 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.867204206Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:42 policy-db-migrator | 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Number of transaction pools: 1 23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:42 policy-apex-pdp | security.providers = null 23:16:42 policy-pap | group.instance.id = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.868629538Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.425132ms 23:16:42 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:42 policy-apex-pdp | send.buffer.bytes = 131072 23:16:42 policy-pap | heartbeat.interval.ms = 3000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.875803227Z level=info msg="Executing migration" id="create org table v1" 23:16:42 
policy-db-migrator | -------------- 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 23:16:42 policy-apex-pdp | session.timeout.ms = 45000 23:16:42 policy-pap | interceptor.classes = [] 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.876623184Z level=info msg="Migration successfully executed" id="create org table v1" duration=819.127µs 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:42 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:42 policy-pap | internal.leave.group.on.close = true 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.881714406Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:42 policy-db-migrator | -------------- 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:42 kafka | [2024-02-29 23:14:14,620] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:42 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:42 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.883133898Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.426822ms 23:16:42 policy-db-migrator | 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Initializing 
buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:42 kafka | [2024-02-29 23:14:14,622] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) 23:16:42 policy-apex-pdp | ssl.cipher.suites = null 23:16:42 policy-pap | isolation.level = read_uncommitted 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.88695114Z level=info msg="Executing migration" id="create org_user table v1" 23:16:42 policy-db-migrator | 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:42 kafka | [2024-02-29 23:14:14,626] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:42 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:42 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.88818012Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.22441ms 23:16:42 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:42 kafka | [2024-02-29 23:14:14,633] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:42 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:42 policy-pap | max.partition.fetch.bytes = 1048576 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.892248443Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:42 policy-db-migrator | -------------- 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: 128 rollback segments are active. 23:16:42 kafka | [2024-02-29 23:14:14,635] INFO [ZooKeeperClient Kafka server] Waiting until connected. 
(kafka.zookeeper.ZooKeeperClient) 23:16:42 policy-apex-pdp | ssl.engine.factory.class = null 23:16:42 policy-pap | max.poll.interval.ms = 300000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.893639705Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.390862ms 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:42 policy-apex-pdp | ssl.key.password = null 23:16:42 policy-pap | max.poll.records = 500 23:16:42 kafka | [2024-02-29 23:14:14,643] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.899393002Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:42 policy-db-migrator | -------------- 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
23:16:42 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:42 policy-pap | metadata.max.age.ms = 300000 23:16:42 kafka | [2024-02-29 23:14:14,651] INFO Socket connection established, initiating session, client: /172.17.0.8:33808, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.90034582Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=952.278µs 23:16:42 policy-db-migrator | 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: log sequence number 347334; transaction id 299 23:16:42 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:42 policy-pap | metric.reporters = [] 23:16:42 kafka | [2024-02-29 23:14:14,661] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000396720001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.903641658Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:42 policy-db-migrator | 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:16:42 policy-apex-pdp | ssl.keystore.key = null 23:16:42 policy-pap | metrics.num.samples = 2 23:16:42 kafka | [2024-02-29 23:14:14,666] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.904877688Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.23597ms 23:16:42 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] Plugin 'FEEDBACK' is disabled. 
23:16:42 policy-apex-pdp | ssl.keystore.location = null 23:16:42 policy-pap | metrics.recording.level = INFO 23:16:42 kafka | [2024-02-29 23:14:14,986] INFO Cluster ID = FqFLOU6jRgiQltXq-uD-BA (kafka.server.KafkaServer) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.908517578Z level=info msg="Executing migration" id="Update org table charset" 23:16:42 policy-db-migrator | -------------- 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:42 policy-apex-pdp | ssl.keystore.password = null 23:16:42 policy-pap | metrics.sample.window.ms = 30000 23:16:42 kafka | [2024-02-29 23:14:14,991] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.908582628Z level=info msg="Migration successfully executed" id="Update org table charset" duration=52.64µs 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:16:42 policy-apex-pdp | ssl.keystore.type = JKS 23:16:42 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:42 kafka | [2024-02-29 23:14:15,046] INFO KafkaConfig values: 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.912219219Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:42 policy-db-migrator | -------------- 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] Server socket created on IP: '0.0.0.0'. 
23:16:42 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:42 policy-pap | receive.buffer.bytes = 65536 23:16:42 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.912261499Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=43.38µs 23:16:42 policy-db-migrator | 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] Server socket created on IP: '::'. 23:16:42 policy-apex-pdp | ssl.provider = null 23:16:42 policy-pap | reconnect.backoff.max.ms = 1000 23:16:42 kafka | alter.config.policy.class.name = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.918905954Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:42 policy-db-migrator | 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] mariadbd: ready for connections. 23:16:42 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:42 policy-pap | reconnect.backoff.ms = 50 23:16:42 kafka | alter.log.dirs.replication.quota.window.num = 11 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.919182556Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=275.802µs 23:16:42 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:16:42 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:42 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:42 policy-pap | request.timeout.ms = 30000 23:16:42 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.922780166Z level=info msg="Executing migration" id="create dashboard table" 23:16:42 policy-db-migrator | -------------- 23:16:42 mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Buffer pool(s) load completed at 240229 23:14:10 23:16:42 policy-apex-pdp | 
ssl.truststore.certificates = null 23:16:42 policy-pap | retry.backoff.ms = 100 23:16:42 kafka | authorizer.class.name = 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.924143047Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.341571ms 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:42 mariadb | 2024-02-29 23:14:10 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 23:16:42 policy-apex-pdp | ssl.truststore.location = null 23:16:42 policy-pap | sasl.client.callback.handler.class = null 23:16:42 kafka | auto.create.topics.enable = true 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.928488443Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:42 policy-db-migrator | -------------- 23:16:42 mariadb | 2024-02-29 23:14:10 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 23:16:42 policy-apex-pdp | ssl.truststore.password = null 23:16:42 policy-pap | sasl.jaas.config = null 23:16:42 kafka | auto.include.jmx.reporter = true 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.929926225Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.445202ms 23:16:42 policy-db-migrator | 23:16:42 mariadb | 2024-02-29 23:14:10 8 [Warning] Aborted connection 8 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:16:42 policy-apex-pdp | ssl.truststore.type = JKS 23:16:42 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:42 kafka | auto.leader.rebalance.enable = true 23:16:42 grafana | logger=migrator 
t=2024-02-29T23:14:02.933526985Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:42 policy-db-migrator | 23:16:42 mariadb | 2024-02-29 23:14:10 10 [Warning] Aborted connection 10 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:16:42 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:42 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:42 kafka | background.threads = 10 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.934526143Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=998.618µs 23:16:42 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:16:42 policy-apex-pdp | 23:16:42 policy-pap | sasl.kerberos.service.name = null 23:16:42 kafka | broker.heartbeat.interval.ms = 2000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.940834486Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.173+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:42 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:42 kafka | broker.id = 1 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.942030466Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.1949ms 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.173+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:42 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:42 kafka | broker.id.generation.enable = true 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.946264421Z level=info 
msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.173+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248485171 23:16:42 policy-pap | sasl.login.callback.handler.class = null 23:16:42 kafka | broker.rack = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.947680032Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.414801ms 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.176+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-1, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Subscribed to topic(s): policy-pdp-pap 23:16:42 policy-pap | sasl.login.class = null 23:16:42 kafka | broker.session.timeout.ms = 9000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.951582765Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.194+00:00|INFO|ServiceManager|main] service manager starting 23:16:42 policy-pap | sasl.login.connect.timeout.ms = null 23:16:42 kafka | client.quota.callback.class = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.952451042Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=862.517µs 23:16:42 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.194+00:00|INFO|ServiceManager|main] service manager starting topics 23:16:42 policy-pap | sasl.login.read.timeout.ms = null 23:16:42 kafka | compression.type = producer 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.958823415Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:42 policy-db-migrator | -------------- 23:16:42 
policy-apex-pdp | [2024-02-29T23:14:45.204+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:16:42 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:42 kafka | connection.failed.authentication.delay.ms = 100 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.968004221Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=9.181536ms 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.232+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:42 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:42 kafka | connections.max.idle.ms = 600000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.971912883Z level=info msg="Executing migration" id="create dashboard v2" 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | allow.auto.create.topics = true 23:16:42 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:42 kafka | connections.max.reauth.ms = 0 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.97271012Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=796.547µs 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | 
auto.commit.interval.ms = 5000 23:16:42 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:42 kafka | control.plane.listener.name = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.976089238Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:42 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:42 kafka | controlled.shutdown.enable = true 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.976906654Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=816.996µs 23:16:42 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:16:42 policy-apex-pdp | auto.offset.reset = latest 23:16:42 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:42 kafka | controlled.shutdown.max.retries = 3 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:42 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:16:42 policy-pap | sasl.mechanism = GSSAPI 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:42 policy-apex-pdp | check.crcs = true 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.983733031Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:42 kafka | controller.listener.names = null 23:16:42 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.98485871Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.123589ms 23:16:42 kafka | controller.quorum.append.linger.ms = 25 
23:16:42 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | client.id = consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.988640562Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:42 kafka | controller.quorum.election.backoff.max.ms = 1000 23:16:42 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | client.rack = 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.989288907Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=647.155µs 23:16:42 kafka | controller.quorum.election.timeout.ms = 1000 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:42 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.992927647Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:42 kafka | controller.quorum.fetch.timeout.ms = 2000 23:16:42 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.993999156Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.072179ms 23:16:42 kafka | controller.quorum.request.timeout.ms = 2000 23:16:42 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:02.999991916Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:42 kafka | 
controller.quorum.retry.backoff.ms = 20 23:16:42 policy-apex-pdp | enable.auto.commit = true 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.000056836Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=68.86µs 23:16:42 kafka | controller.quorum.voters = [] 23:16:42 policy-apex-pdp | exclude.internal.topics = true 23:16:42 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.003708756Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:42 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:42 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:42 policy-db-migrator | 23:16:42 kafka | controller.quota.window.num = 11 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.006676623Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.966827ms 23:16:42 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:42 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:42 kafka | controller.quota.window.size.seconds = 1 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.05784908Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:42 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:16:42 policy-apex-pdp | fetch.min.bytes = 1 23:16:42 policy-pap | security.protocol = PLAINTEXT 23:16:42 kafka | controller.socket.timeout.ms = 30000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.060713429Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.859229ms 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | group.id = 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f 23:16:42 policy-pap | 
security.providers = null 23:16:42 kafka | create.topic.policy.class.name = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.064211294Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:42 policy-apex-pdp | group.instance.id = null 23:16:42 policy-pap | send.buffer.bytes = 131072 23:16:42 kafka | default.replication.factor = 1 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.066037652Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.825798ms 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:42 policy-pap | session.timeout.ms = 45000 23:16:42 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.072424555Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | interceptor.classes = [] 23:16:42 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:42 kafka | delegation.token.expiry.time.ms = 86400000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.073787729Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.362474ms 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | internal.leave.group.on.close = true 23:16:42 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:42 kafka | delegation.token.master.key = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.077310554Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:42 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:16:42 policy-apex-pdp | 
internal.throw.on.fetch.stable.offset.unsupported = false 23:16:42 policy-pap | ssl.cipher.suites = null 23:16:42 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.079438755Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.137141ms 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | isolation.level = read_uncommitted 23:16:42 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:42 kafka | delegation.token.secret.key = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.083197503Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:42 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:42 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:42 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.084063601Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=860.228µs 23:16:42 policy-pap | ssl.engine.factory.class = null 23:16:42 kafka | delete.topic.enable = true 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.090256223Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:42 policy-pap | ssl.key.password = null 23:16:42 kafka | early.start.listeners = null 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | max.poll.records = 500 23:16:42 grafana | logger=migrator 
t=2024-02-29T23:14:03.091373094Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.116511ms
23:16:42 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:42 kafka | fetch.max.bytes = 57671680
23:16:42 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
23:16:42 policy-apex-pdp | metadata.max.age.ms = 300000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.094784128Z level=info msg="Executing migration" id="Update dashboard table charset"
23:16:42 policy-pap | ssl.keystore.certificate.chain = null
23:16:42 kafka | fetch.purgatory.purge.interval.requests = 1000
23:16:42 policy-db-migrator | --------------
23:16:42 policy-apex-pdp | metric.reporters = []
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.094815468Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=32.02µs
23:16:42 policy-pap | ssl.keystore.key = null
23:16:42 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:42 policy-apex-pdp | metrics.num.samples = 2
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.098414334Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
23:16:42 policy-pap | ssl.keystore.location = null
23:16:42 kafka | group.consumer.heartbeat.interval.ms = 5000
23:16:42 policy-db-migrator | --------------
23:16:42 policy-apex-pdp | metrics.recording.level = INFO
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.098446885Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=33.461µs
23:16:42 policy-pap | ssl.keystore.password = null
23:16:42 kafka | group.consumer.max.heartbeat.interval.ms = 15000
23:16:42 policy-db-migrator |
23:16:42 policy-apex-pdp | metrics.sample.window.ms = 30000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.105245552Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
23:16:42 policy-pap | ssl.keystore.type = JKS
23:16:42 kafka | group.consumer.max.session.timeout.ms = 60000
23:16:42 policy-db-migrator |
23:16:42 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.107829528Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.581416ms
23:16:42 policy-pap | ssl.protocol = TLSv1.3
23:16:42 kafka | group.consumer.max.size = 2147483647
23:16:42 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
23:16:42 policy-apex-pdp | receive.buffer.bytes = 65536
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.115091531Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
23:16:42 policy-pap | ssl.provider = null
23:16:42 kafka | group.consumer.min.heartbeat.interval.ms = 5000
23:16:42 policy-db-migrator | --------------
23:16:42 policy-apex-pdp | reconnect.backoff.max.ms = 1000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.11701020Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.92101ms
23:16:42 policy-pap | ssl.secure.random.implementation = null
23:16:42 kafka | group.consumer.min.session.timeout.ms = 45000
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:42 policy-apex-pdp | reconnect.backoff.ms = 50
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.120717677Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
23:16:42 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:42 kafka | group.consumer.session.timeout.ms = 45000
23:16:42 policy-db-migrator | --------------
23:16:42 policy-apex-pdp | request.timeout.ms = 30000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.122681306Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.963139ms
23:16:42 policy-pap | ssl.truststore.certificates = null
23:16:42 kafka | group.coordinator.new.enable = false
23:16:42 policy-db-migrator |
23:16:42 policy-apex-pdp | retry.backoff.ms = 100
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.128945419Z level=info msg="Executing migration" id="Add column uid in dashboard"
23:16:42 policy-pap | ssl.truststore.location = null
23:16:42 kafka | group.coordinator.threads = 1
23:16:42 policy-db-migrator |
23:16:42 policy-apex-pdp | sasl.client.callback.handler.class = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.131445464Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.497935ms
23:16:42 policy-pap | ssl.truststore.password = null
23:16:42 kafka | group.initial.rebalance.delay.ms = 3000
23:16:42 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
23:16:42 policy-apex-pdp | sasl.jaas.config = null
23:16:42 policy-pap | ssl.truststore.type = JKS
23:16:42 kafka | group.max.session.timeout.ms = 1800000
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.135525884Z level=info msg="Executing migration" id="Update uid column values in dashboard"
23:16:42 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:42 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:42 kafka | group.max.size = 2147483647
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.136003099Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=476.835µs
23:16:42 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
23:16:42 policy-pap |
23:16:42 kafka | group.min.session.timeout.ms = 6000
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.139922738Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
23:16:42 policy-apex-pdp | sasl.kerberos.service.name = null
23:16:42 policy-pap | [2024-02-29T23:14:41.774+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:42 kafka | initial.broker.registration.timeout.ms = 60000
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.141330032Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.406784ms
23:16:42 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:42 policy-pap | [2024-02-29T23:14:41.774+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:42 kafka | inter.broker.listener.name = PLAINTEXT
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.146498644Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
23:16:42 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:42 policy-pap | [2024-02-29T23:14:41.774+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248481772
23:16:42 kafka | inter.broker.protocol.version = 3.6-IV2
23:16:42 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.147799137Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.300523ms
23:16:42 policy-apex-pdp | sasl.login.callback.handler.class = null
23:16:42 policy-pap | [2024-02-29T23:14:41.777+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-1, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Subscribed to topic(s): policy-pdp-pap
23:16:42 kafka | kafka.metrics.polling.interval.secs = 10
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.151400363Z level=info msg="Executing migration" id="Update dashboard title length"
23:16:42 policy-apex-pdp | sasl.login.class = null
23:16:42 policy-pap | [2024-02-29T23:14:41.778+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:42 kafka | kafka.metrics.reporters = []
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.151429353Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=26.65µs
23:16:42 policy-apex-pdp | sasl.login.connect.timeout.ms = null
23:16:42 policy-pap | allow.auto.create.topics = true
23:16:42 kafka | leader.imbalance.check.interval.seconds = 300
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.15414002Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
23:16:42 policy-apex-pdp | sasl.login.read.timeout.ms = null
23:16:42 policy-pap | auto.commit.interval.ms = 5000
23:16:42 kafka | leader.imbalance.per.broker.percentage = 10
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.154964828Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=821.398µs
23:16:42 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
23:16:42 policy-pap | auto.include.jmx.reporter = true
23:16:42 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.160607434Z level=info msg="Executing migration" id="create dashboard_provisioning"
23:16:42 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
23:16:42 policy-pap | auto.offset.reset = latest
23:16:42 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
23:16:42 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.163016198Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=2.396394ms
23:16:42 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
23:16:42 policy-pap | bootstrap.servers = [kafka:9092]
23:16:42 kafka | log.cleaner.backoff.ms = 15000
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.166816686Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
23:16:42 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
23:16:42 policy-pap | check.crcs = true
23:16:42 kafka | log.cleaner.dedupe.buffer.size = 134217728
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.174102399Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.276733ms
23:16:42 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
23:16:42 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:42 kafka | log.cleaner.delete.retention.ms = 86400000
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.176939457Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
23:16:42 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
23:16:42 policy-pap | client.id = consumer-policy-pap-2
23:16:42 kafka | log.cleaner.enable = true
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.177573783Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=634.186µs
23:16:42 policy-apex-pdp | sasl.mechanism = GSSAPI
23:16:42 policy-pap | client.rack =
23:16:42 kafka | log.cleaner.io.buffer.load.factor = 0.9
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.181652754Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
23:16:42 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
23:16:42 policy-pap | connections.max.idle.ms = 540000
23:16:42 kafka | log.cleaner.io.buffer.size = 524288
23:16:42 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.183371391Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.716447ms
23:16:42 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
23:16:42 policy-pap | default.api.timeout.ms = 60000
23:16:42 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.187036688Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
23:16:42 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
23:16:42 policy-pap | enable.auto.commit = true
23:16:42 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.188429872Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.392334ms
23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:42 policy-pap | exclude.internal.topics = true
23:16:42 kafka | log.cleaner.min.cleanable.ratio = 0.5
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.191959667Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:42 policy-pap | fetch.max.bytes = 52428800
23:16:42 kafka | log.cleaner.min.compaction.lag.ms = 0
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.19226338Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=308.123µs
23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:42 policy-pap | fetch.max.wait.ms = 500
23:16:42 kafka | log.cleaner.threads = 1
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.195589533Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
23:16:42 policy-pap | fetch.min.bytes = 1
23:16:42 kafka | log.cleanup.policy = [delete]
23:16:42 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.19621817Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=626.226µs
23:16:42 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
23:16:42 policy-pap | group.id = policy-pap
23:16:42 kafka | log.dir = /tmp/kafka-logs
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.20130144Z level=info msg="Executing migration" id="Add check_sum column"
23:16:42 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
23:16:42 policy-pap | group.instance.id = null
23:16:42 kafka | log.dirs = /var/lib/kafka/data
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.204729074Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.426524ms
23:16:42 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
23:16:42 policy-pap | heartbeat.interval.ms = 3000
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.20830234Z level=info msg="Executing migration" id="Add index for dashboard_title"
23:16:42 policy-pap | interceptor.classes = []
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | log.flush.interval.messages = 9223372036854775807
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.209126098Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=823.398µs
23:16:42 policy-db-migrator |
23:16:42 kafka | log.flush.interval.ms = null
23:16:42 policy-apex-pdp | security.protocol = PLAINTEXT
23:16:42 policy-pap | internal.leave.group.on.close = true
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.215113598Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
23:16:42 policy-db-migrator |
23:16:42 kafka | log.flush.offset.checkpoint.interval.ms = 60000
23:16:42 policy-apex-pdp | security.providers = null
23:16:42 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.215385601Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=272.212µs
23:16:42 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
23:16:42 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
23:16:42 policy-apex-pdp | send.buffer.bytes = 131072
23:16:42 policy-pap | isolation.level = read_uncommitted
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.218959846Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
23:16:42 policy-apex-pdp | session.timeout.ms = 45000
23:16:42 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.219225809Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=266.283µs
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
23:16:42 kafka | log.index.interval.bytes = 4096
23:16:42 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
23:16:42 policy-pap | max.partition.fetch.bytes = 1048576
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.222883105Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | log.index.size.max.bytes = 10485760
23:16:42 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
23:16:42 policy-pap | max.poll.interval.ms = 300000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.224171618Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.288433ms
23:16:42 policy-db-migrator |
23:16:42 kafka | log.local.retention.bytes = -2
23:16:42 policy-apex-pdp | ssl.cipher.suites = null
23:16:42 policy-pap | max.poll.records = 500
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.227655723Z level=info msg="Executing migration" id="Add isPublic for dashboard"
23:16:42 policy-db-migrator |
23:16:42 kafka | log.local.retention.ms = -2
23:16:42 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:42 policy-pap | metadata.max.age.ms = 300000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.231088377Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.432934ms
23:16:42 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
23:16:42 kafka | log.message.downconversion.enable = true
23:16:42 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
23:16:42 policy-pap | metric.reporters = []
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.237033956Z level=info msg="Executing migration" id="create data_source table"
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | log.message.format.version = 3.0-IV1
23:16:42 policy-apex-pdp | ssl.engine.factory.class = null
23:16:42 policy-pap | metrics.num.samples = 2
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.237931325Z level=info msg="Migration successfully executed" id="create data_source table" duration=896.659µs
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:16:42 kafka | log.message.timestamp.after.max.ms = 9223372036854775807
23:16:42 policy-apex-pdp | ssl.key.password = null
23:16:42 policy-pap | metrics.recording.level = INFO
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.24135875Z level=info msg="Executing migration" id="add index data_source.account_id"
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | log.message.timestamp.before.max.ms = 9223372036854775807
23:16:42 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:42 policy-pap | metrics.sample.window.ms = 30000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.242614632Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.255312ms
23:16:42 policy-db-migrator |
23:16:42 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
23:16:42 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:16:42 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.246246358Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
23:16:42 policy-db-migrator |
23:16:42 kafka | log.message.timestamp.type = CreateTime
23:16:42 policy-apex-pdp | ssl.keystore.key = null
23:16:42 policy-pap | receive.buffer.bytes = 65536
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.247531391Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.279733ms
23:16:42 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
23:16:42 kafka | log.preallocate = false
23:16:42 policy-apex-pdp | ssl.keystore.location = null
23:16:42 policy-pap | reconnect.backoff.max.ms = 1000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.253299728Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | log.retention.bytes = -1
23:16:42 policy-apex-pdp | ssl.keystore.password = null
23:16:42 policy-pap | reconnect.backoff.ms = 50
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.254114146Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=814.148µs
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:42 kafka | log.retention.check.interval.ms = 300000
23:16:42 policy-apex-pdp | ssl.keystore.type = JKS
23:16:42 policy-pap | request.timeout.ms = 30000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.258078986Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | log.retention.hours = 168
23:16:42 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:42 policy-pap | retry.backoff.ms = 100
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.259259618Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.173802ms
23:16:42 policy-db-migrator |
23:16:42 policy-apex-pdp | ssl.provider = null
23:16:42 policy-pap | sasl.client.callback.handler.class = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.262783753Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
23:16:42 kafka | log.retention.minutes = null
23:16:42 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:42 policy-pap | sasl.jaas.config = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.271896944Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.113211ms
23:16:42 policy-db-migrator |
23:16:42 kafka | log.retention.ms = null
23:16:42 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.277455749Z level=info msg="Executing migration" id="create data_source table v2"
23:16:42 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
23:16:42 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:42 kafka | log.roll.hours = 168
23:16:42 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.278305038Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=848.749µs
23:16:42 policy-db-migrator | --------------
23:16:42 policy-apex-pdp | ssl.truststore.certificates = null
23:16:42 policy-pap | sasl.kerberos.service.name = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.282030375Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:42 kafka | log.roll.jitter.hours = 0
23:16:42 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.282910394Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=874.589µs
23:16:42 policy-db-migrator | --------------
23:16:42 policy-apex-pdp | ssl.truststore.location = null
23:16:42 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.288780242Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
23:16:42 policy-db-migrator |
23:16:42 policy-apex-pdp | ssl.truststore.password = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.289735192Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=954.31µs
23:16:42 policy-db-migrator |
23:16:42 kafka | log.roll.jitter.ms = null
23:16:42 policy-pap | sasl.login.callback.handler.class = null
23:16:42 policy-apex-pdp | ssl.truststore.type = JKS
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.29355687Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
23:16:42 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
23:16:42 kafka | log.roll.ms = null
23:16:42 policy-pap | sasl.login.class = null
23:16:42 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.294423589Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=866.818µs
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | log.segment.bytes = 1073741824
23:16:42 policy-pap | sasl.login.connect.timeout.ms = null
23:16:42 policy-apex-pdp |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.298340667Z level=info msg="Executing migration" id="Add column with_credentials"
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:42 kafka | log.segment.delete.delay.ms = 60000
23:16:42 policy-pap | sasl.login.read.timeout.ms = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.302123375Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.783038ms
23:16:42 kafka | max.connection.creation.rate = 2147483647
23:16:42 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.241+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.307948433Z level=info msg="Executing migration" id="Add secure json data column"
23:16:42 kafka | max.connections = 2147483647
23:16:42 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.241+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.310359977Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.410984ms
23:16:42 kafka | max.connections.per.ip = 2147483647
23:16:42 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.241+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248485241
23:16:42 policy-db-migrator |
23:16:42 kafka | max.connections.per.ip.overrides =
23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.242+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Subscribed to topic(s): policy-pdp-pap
23:16:42 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.314295637Z level=info msg="Executing migration" id="Update data_source table charset"
23:16:42 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:42 kafka | max.incremental.fetch.session.cache.slots = 1000
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.314340967Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=52.62µs
23:16:42 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.245+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=30ab67d0-1072-4fed-bd59-8343130e1fdb, alive=false, publisher=null]]: starting
23:16:42 kafka | message.max.bytes = 1048588
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.318199786Z level=info msg="Executing migration" id="Update initial version to 1"
23:16:42 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.264+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:42 kafka | metadata.log.dir = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.318478238Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=289.182µs
23:16:42 policy-pap | sasl.mechanism = GSSAPI
23:16:42 policy-apex-pdp | acks = -1
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
23:16:42 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:42 policy-apex-pdp | auto.include.jmx.reporter = true
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.322362277Z level=info msg="Executing migration" id="Add read_only data column"
23:16:42 kafka | metadata.log.max.snapshot.interval.ms = 3600000
23:16:42 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:42 policy-apex-pdp | batch.size = 16384
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.326000453Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.637456ms
23:16:42 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.331870092Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
23:16:42 kafka | metadata.log.segment.bytes = 1073741824
23:16:42 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:42 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
23:16:42 policy-apex-pdp | buffer.memory = 33554432
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.332046763Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=176.691µs
23:16:42 kafka | metadata.log.segment.min.bytes = 8388608
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:42 policy-db-migrator | --------------
23:16:42 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.335747271Z level=info msg="Executing migration" id="Update json_data with nulls"
23:16:42 kafka | metadata.log.segment.ms = 604800000
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:16:42 policy-apex-pdp | client.id = producer-1
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.335989703Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=242.543µs
23:16:42 kafka | metadata.max.idle.interval.ms = 500
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:42 policy-db-migrator | --------------
23:16:42 policy-apex-pdp | compression.type = none
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.340109124Z level=info msg="Executing migration" id="Add uid column"
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:42 policy-db-migrator |
23:16:42 policy-apex-pdp | connections.max.idle.ms = 540000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.34374736Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.625216ms
23:16:42 kafka | metadata.max.retention.bytes = 104857600
23:16:42 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:42 policy-db-migrator |
23:16:42 policy-apex-pdp | delivery.timeout.ms = 120000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.347321356Z level=info msg="Executing migration" id="Update uid value"
23:16:42 kafka | metadata.max.retention.ms = 604800000
23:16:42 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:42 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
23:16:42 policy-apex-pdp | enable.idempotence = true
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.347491218Z level=info msg="Migration successfully executed" id="Update uid value" duration=167.252µs
23:16:42 kafka | metric.reporters = []
23:16:42 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:42 policy-db-migrator | --------------
23:16:42 policy-apex-pdp | interceptor.classes = []
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.35272696Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
23:16:42 kafka | metrics.num.samples = 2
23:16:42 policy-pap | security.protocol = PLAINTEXT
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:42 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.353968142Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.241182ms
23:16:42 kafka | metrics.recording.level = INFO
23:16:42 policy-pap | security.providers = null
23:16:42 policy-db-migrator | --------------
23:16:42 policy-apex-pdp | linger.ms = 0
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.357342216Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
23:16:42 kafka | metrics.sample.window.ms = 30000
23:16:42 policy-pap | send.buffer.bytes = 131072
23:16:42 policy-db-migrator |
23:16:42 policy-apex-pdp | max.block.ms = 60000
23:16:42 kafka | min.insync.replicas = 1
23:16:42 policy-pap | session.timeout.ms = 45000
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.35872574Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.382733ms
23:16:42 policy-apex-pdp | max.in.flight.requests.per.connection = 5
23:16:42 kafka | node.id = 1
23:16:42 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:42 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
23:16:42 policy-apex-pdp | max.request.size = 1048576
23:16:42 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.362715159Z level=info msg="Executing migration" id="create api_key table"
23:16:42 kafka | num.io.threads = 8
23:16:42 policy-apex-pdp | metadata.max.age.ms = 300000
23:16:42 policy-pap | ssl.cipher.suites = null
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.363418096Z level=info msg="Migration successfully executed" id="create api_key table" duration=702.437µs
23:16:42 kafka | num.network.threads = 3
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | num.partitions = 1
23:16:42 policy-apex-pdp | metadata.max.idle.ms = 300000
23:16:42 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.36978568Z level=info msg="Executing migration" id="add index api_key.account_id"
23:16:42 policy-db-migrator |
23:16:42 kafka | num.recovery.threads.per.data.dir = 1
23:16:42 policy-apex-pdp | metric.reporters = []
23:16:42 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.371001792Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.216112ms
23:16:42 policy-db-migrator |
23:16:42 kafka | num.replica.alter.log.dirs.threads = null
23:16:42 policy-apex-pdp | metrics.num.samples = 2
23:16:42 policy-pap | ssl.engine.factory.class = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.374580767Z level=info msg="Executing migration" id="add index api_key.key"
23:16:42 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
23:16:42 kafka | num.replica.fetchers = 1
23:16:42 policy-apex-pdp | metrics.recording.level = INFO
23:16:42 policy-pap | ssl.key.password = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.37585489Z level=info msg="Migration successfully executed" id="add
index api_key.key" duration=1.272443ms 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:42 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:42 policy-pap | ssl.keystore.certificate.chain = null 23:16:42 kafka | offset.metadata.max.bytes = 4096 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 23:16:42 kafka | offsets.commit.required.acks = -1 23:16:42 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.379467526Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:16:42 policy-pap | ssl.keystore.key = null 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | offsets.commit.timeout.ms = 5000 23:16:42 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.380335385Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=866.649µs 23:16:42 policy-pap | ssl.keystore.location = null 23:16:42 policy-db-migrator | 23:16:42 kafka | offsets.load.buffer.size = 5242880 23:16:42 policy-apex-pdp | partitioner.class = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.390262464Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:16:42 policy-pap | ssl.keystore.password = null 23:16:42 policy-db-migrator | 23:16:42 kafka | offsets.retention.check.interval.ms = 600000 23:16:42 policy-apex-pdp | partitioner.ignore.keys = false 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.391460466Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.197402ms 23:16:42 policy-pap | 
ssl.keystore.type = JKS 23:16:42 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 23:16:42 kafka | offsets.retention.minutes = 10080 23:16:42 policy-apex-pdp | receive.buffer.bytes = 32768 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.395051411Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:16:42 policy-pap | ssl.protocol = TLSv1.3 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | offsets.topic.compression.codec = 0 23:16:42 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.396226413Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.174342ms 23:16:42 policy-pap | ssl.provider = null 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:42 kafka | offsets.topic.num.partitions = 50 23:16:42 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:42 policy-pap | ssl.secure.random.implementation = null 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.402611127Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:16:42 kafka | offsets.topic.replication.factor = 1 23:16:42 policy-apex-pdp | request.timeout.ms = 30000 23:16:42 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.403773229Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.158561ms 23:16:42 kafka | offsets.topic.segment.bytes = 104857600 23:16:42 policy-apex-pdp | retries = 
2147483647 23:16:42 policy-pap | ssl.truststore.certificates = null 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.407623067Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:16:42 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:16:42 policy-apex-pdp | retry.backoff.ms = 100 23:16:42 policy-pap | ssl.truststore.location = null 23:16:42 policy-db-migrator | > upgrade 0450-pdpgroup.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.416341834Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.717937ms 23:16:42 kafka | password.encoder.iterations = 4096 23:16:42 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:42 policy-pap | ssl.truststore.password = null 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.420288733Z level=info msg="Executing migration" id="create api_key table v2" 23:16:42 kafka | password.encoder.key.length = 128 23:16:42 policy-apex-pdp | sasl.jaas.config = null 23:16:42 policy-pap | ssl.truststore.type = JKS 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.42095978Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=670.677µs 23:16:42 kafka | password.encoder.keyfactory.algorithm = null 23:16:42 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:42 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.476859957Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:16:42 
kafka | password.encoder.old.secret = null 23:16:42 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:42 policy-pap | 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.47815764Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.301853ms 23:16:42 kafka | password.encoder.secret = null 23:16:42 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:42 policy-pap | [2024-02-29T23:14:41.784+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.482053039Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:16:42 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:16:42 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:42 policy-pap | [2024-02-29T23:14:41.784+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:42 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.48320947Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.155421ms 23:16:42 kafka | process.roles = [] 23:16:42 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:42 policy-pap | [2024-02-29T23:14:41.784+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248481784 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.487546624Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:16:42 kafka | producer.id.expiration.check.interval.ms = 600000 23:16:42 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:42 policy-pap | [2024-02-29T23:14:41.784+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): 
policy-pdp-pap 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.48913178Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.588946ms 23:16:42 kafka | producer.id.expiration.ms = 86400000 23:16:42 policy-apex-pdp | sasl.login.class = null 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | [2024-02-29T23:14:42.177+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.495946978Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:16:42 kafka | producer.purgatory.purge.interval.requests = 1000 23:16:42 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:42 policy-db-migrator | 23:16:42 policy-pap | [2024-02-29T23:14:42.339+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.496298601Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=351.714µs 23:16:42 kafka | queued.max.request.bytes = -1 23:16:42 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:42 policy-db-migrator | 23:16:42 policy-pap | [2024-02-29T23:14:42.595+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@53917c92, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1fa796a4, org.springframework.security.web.context.SecurityContextHolderFilter@1f013047, org.springframework.security.web.header.HeaderWriterFilter@ce0bbd5, org.springframework.security.web.authentication.logout.LogoutFilter@44c2e8a8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4fbbd98c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@51566ce0, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@17e6d07b, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@68de8522, org.springframework.security.web.access.ExceptionTranslationFilter@1f7557fe, org.springframework.security.web.access.intercept.AuthorizationFilter@3879feec] 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.499838426Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:16:42 kafka | queued.max.requests = 500 23:16:42 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:42 policy-db-migrator | > upgrade 0470-pdp.sql 23:16:42 policy-pap | [2024-02-29T23:14:43.482+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.500702095Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=863.489µs 23:16:42 
kafka | quota.window.num = 11 23:16:42 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | [2024-02-29T23:14:43.609+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:42 kafka | quota.window.size.seconds = 1 23:16:42 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:42 policy-pap | [2024-02-29T23:14:43.635+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:16:42 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:42 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.506985778Z level=info msg="Executing migration" id="Update api_key table charset" 23:16:42 policy-pap | [2024-02-29T23:14:43.656+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:42 kafka | remote.log.manager.task.interval.ms = 30000 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.507025558Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=40.5µs 23:16:42 policy-pap | [2024-02-29T23:14:43.656+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:42 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.510858746Z level=info 
msg="Executing migration" id="Add expires to api_key table" 23:16:42 policy-pap | [2024-02-29T23:14:43.657+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:42 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.514861286Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.00085ms 23:16:42 policy-pap | [2024-02-29T23:14:43.657+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:42 kafka | remote.log.manager.task.retry.jitter = 0.2 23:16:42 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:16:42 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.518315631Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:42 policy-pap | [2024-02-29T23:14:43.657+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:42 kafka | remote.log.manager.thread.pool.size = 10 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.520728095Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.415365ms 23:16:42 policy-pap | [2024-02-29T23:14:43.658+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:16:42 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT 
BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:16:42 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.524203419Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:16:42 policy-pap | [2024-02-29T23:14:43.658+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:42 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.524362071Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=158.382µs 23:16:42 policy-pap | [2024-02-29T23:14:43.662+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ee5900cb-eee5-431a-a953-12f2e7174bf4, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3ff3275b 23:16:42 kafka | remote.log.metadata.manager.class.path = null 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.529133968Z level=info msg="Executing migration" 
id="Add last_used_at to api_key table" 23:16:42 policy-pap | [2024-02-29T23:14:43.674+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ee5900cb-eee5-431a-a953-12f2e7174bf4, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:42 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.531562813Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.428395ms 23:16:42 policy-pap | [2024-02-29T23:14:43.674+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:42 kafka | remote.log.metadata.manager.listener.name = null 23:16:42 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:16:42 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.53530098Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:42 policy-pap | allow.auto.create.topics = true 23:16:42 kafka | remote.log.reader.max.pending.tasks = 100 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.537856275Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.553035ms 23:16:42 policy-pap | auto.commit.interval.ms = 5000 
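The ConsumerConfig and ProducerConfig dumps interleaved above print one `key = value` pair per entry (`acks = -1`, `enable.idempotence = true`, `bootstrap.servers = [kafka:9092]`, …). When auditing such a log it can help to turn the dump back into a typed dictionary; below is a minimal sketch of such a parser — `parse_config_dump` is a hypothetical helper name, not part of Kafka or the CSIT tooling.

```python
import re

# Matches one 'key = value' line as printed by Kafka's config dumps.
_CONFIG_RE = re.compile(r"^\s*([\w.]+)\s*=\s*(.*?)\s*$")

def parse_config_dump(lines):
    """Parse Kafka-style 'key = value' config lines into a dict.

    Values are coerced to int/float/bool/None where the text allows it;
    bracketed lists like '[kafka:9092]' become Python lists of strings.
    """
    def coerce(text):
        if text in ("null", ""):  # Kafka prints absent values as 'null'
            return None
        if text in ("true", "false"):
            return text == "true"
        if text.startswith("[") and text.endswith("]"):
            inner = text[1:-1].strip()
            return [item.strip() for item in inner.split(",")] if inner else []
        for cast in (int, float):
            try:
                return cast(text)
            except ValueError:
                pass
        return text

    config = {}
    for line in lines:
        match = _CONFIG_RE.match(line)
        if match:
            config[match.group(1)] = coerce(match.group(2))
    return config
```

With lines lifted from a dump like the one above, `parse_config_dump(["acks = -1", "enable.idempotence = true"])` yields `{"acks": -1, "enable.idempotence": True}`, which makes it easy to diff the effective producer and consumer settings of two runs.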
23:16:42 kafka | remote.log.reader.threads = 10 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:42 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.541164258Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:42 policy-pap | auto.include.jmx.reporter = true 23:16:42 kafka | remote.log.storage.manager.class.name = null 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.541910086Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=749.158µs 23:16:42 policy-pap | auto.offset.reset = latest 23:16:42 kafka | remote.log.storage.manager.class.path = null 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.54733353Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:42 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:42 policy-pap | bootstrap.servers = [kafka:9092] 23:16:42 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
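Each grafana migrator entry above reports its cost as `duration=3.637506ms` or `duration=176.921µs`. A minimal sketch for totalling those figures across a log (so slow migrations stand out) — the helper name and unit table are assumptions, not grafana API:

```python
import re

# Captures the numeric value and unit from grafana's 'duration=<n><unit>' field.
_DURATION_RE = re.compile(r"duration=([\d.]+)(µs|ms|s)")

# Conversion factors into milliseconds for each unit grafana emits here.
_SCALE_TO_MS = {"µs": 1e-3, "ms": 1.0, "s": 1e3}

def total_migration_ms(log_lines):
    """Sum every 'duration=<value><unit>' found in the given log lines."""
    total = 0.0
    for line in log_lines:
        for value, unit in _DURATION_RE.findall(line):
            total += float(value) * _SCALE_TO_MS[unit]
    return total
```

Feeding it the grafana lines from this run would show whether the schema migrations (mostly sub-millisecond here, with outliers like the 8.7 ms `api_key` rename) dominate startup time.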
23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.547873735Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=539.895µs 23:16:42 policy-apex-pdp | security.providers = null 23:16:42 policy-pap | check.crcs = true 23:16:42 kafka | remote.log.storage.system.enable = false 23:16:42 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.551644393Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:42 policy-apex-pdp | send.buffer.bytes = 131072 23:16:42 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:42 kafka | replica.fetch.backoff.ms = 1000 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.552760674Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.111771ms 23:16:42 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:42 policy-pap | client.id = consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3 23:16:42 kafka | replica.fetch.max.bytes = 1048576 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.557334009Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:42 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:42 policy-pap | client.rack = 23:16:42 kafka | replica.fetch.min.bytes = 1 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.558581442Z level=info msg="Migration 
successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.246633ms 23:16:42 policy-apex-pdp | ssl.cipher.suites = null 23:16:42 policy-pap | connections.max.idle.ms = 540000 23:16:42 kafka | replica.fetch.response.max.bytes = 10485760 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.563925065Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:42 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:42 policy-pap | default.api.timeout.ms = 60000 23:16:42 kafka | replica.fetch.wait.max.ms = 500 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.564736173Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=810.698µs 23:16:42 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:42 policy-pap | enable.auto.commit = true 23:16:42 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:42 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.56845598Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:42 policy-apex-pdp | ssl.engine.factory.class = null 23:16:42 policy-pap | exclude.internal.topics = true 23:16:42 kafka | replica.lag.time.max.ms = 30000 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.56947561Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.02086ms 23:16:42 policy-apex-pdp | ssl.key.password = null 23:16:42 policy-pap | fetch.max.bytes = 52428800 23:16:42 kafka | replica.selector.class = null 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version 
VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.573321729Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:42 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:42 policy-pap | fetch.max.wait.ms = 500 23:16:42 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.57345813Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=137.451µs 23:16:42 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:42 policy-pap | fetch.min.bytes = 1 23:16:42 kafka | replica.socket.timeout.ms = 30000 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.579142777Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:42 policy-apex-pdp | ssl.keystore.key = null 23:16:42 policy-pap | group.id = ee5900cb-eee5-431a-a953-12f2e7174bf4 23:16:42 kafka | replication.quota.window.num = 11 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.579204207Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=62.86µs 23:16:42 policy-apex-pdp | ssl.keystore.location = null 23:16:42 policy-pap | group.instance.id = null 23:16:42 kafka | replication.quota.window.size.seconds = 1 23:16:42 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.58349572Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:42 policy-apex-pdp | ssl.keystore.password = null 23:16:42 policy-pap | heartbeat.interval.ms = 3000 23:16:42 kafka | 
request.timeout.ms = 30000 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.588131976Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.631376ms 23:16:42 policy-apex-pdp | ssl.keystore.type = JKS 23:16:42 policy-pap | interceptor.classes = [] 23:16:42 kafka | reserved.broker.max.id = 1000 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.592088336Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:16:42 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:42 policy-pap | internal.leave.group.on.close = true 23:16:42 kafka | sasl.client.callback.handler.class = null 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.596936454Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=4.852998ms 23:16:42 policy-apex-pdp | ssl.provider = null 23:16:42 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:42 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.602118696Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:16:42 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:42 policy-pap | isolation.level = read_uncommitted 23:16:42 kafka | sasl.jaas.config = null 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.602178876Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=60.61µs 23:16:42 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:42 policy-pap | 
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:42 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:42 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.605008865Z level=info msg="Executing migration" id="create quota table v1" 23:16:42 policy-apex-pdp | ssl.truststore.certificates = null 23:16:42 policy-pap | max.partition.fetch.bytes = 1048576 23:16:42 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.605474429Z level=info msg="Migration successfully executed" id="create quota table v1" duration=465.224µs 23:16:42 policy-apex-pdp | ssl.truststore.location = null 23:16:42 policy-pap | max.poll.interval.ms = 300000 23:16:42 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:42 policy-apex-pdp | ssl.truststore.password = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.609577861Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:16:42 policy-pap | max.poll.records = 500 23:16:42 kafka | sasl.kerberos.service.name = null 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | ssl.truststore.type = JKS 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.611383388Z level=info msg="Migration successfully executed" id="create 
index UQE_quota_org_id_user_id_target - v1" duration=1.804577ms 23:16:42 policy-pap | metadata.max.age.ms = 300000 23:16:42 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.616835693Z level=info msg="Executing migration" id="Update quota table charset" 23:16:42 policy-pap | metric.reporters = [] 23:16:42 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | transactional.id = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.616864653Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=30.08µs 23:16:42 policy-pap | metrics.num.samples = 2 23:16:42 kafka | sasl.login.callback.handler.class = null 23:16:42 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:42 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.621513879Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:42 policy-pap | metrics.recording.level = INFO 23:16:42 kafka | sasl.login.class = null 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.622256257Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=741.968µs 23:16:42 policy-pap | metrics.sample.window.ms = 30000 23:16:42 kafka | sasl.login.connect.timeout.ms = null 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.278+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.625131105Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:16:42 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:16:42 kafka | sasl.login.read.timeout.ms = null 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.300+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.626024444Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=884.439µs 23:16:42 policy-pap | receive.buffer.bytes = 65536 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.300+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.628877683Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:42 policy-pap | reconnect.backoff.max.ms = 1000 23:16:42 policy-db-migrator | 23:16:42 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.301+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248485300 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.631833672Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.955689ms 23:16:42 policy-pap | reconnect.backoff.ms = 50 23:16:42 policy-db-migrator | 23:16:42 kafka | sasl.login.refresh.window.factor = 0.8 
23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.301+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=30ab67d0-1072-4fed-bd59-8343130e1fdb, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.63666208Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:42 policy-pap | request.timeout.ms = 30000 23:16:42 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:42 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.301+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.636687921Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=26.731µs 23:16:42 policy-pap | retry.backoff.ms = 100 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.301+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.640015494Z level=info msg="Executing migration" id="create session table" 23:16:42 policy-pap | sasl.client.callback.handler.class = null 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:16:42 kafka | sasl.login.retry.backoff.ms = 100 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.304+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.640805832Z level=info msg="Migration successfully executed" id="create session table" duration=790.068µs 23:16:42 policy-pap | sasl.jaas.config = null 23:16:42 policy-db-migrator 
| -------------- 23:16:42 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.304+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.648709271Z level=info msg="Executing migration" id="Drop old table playlist table" 23:16:42 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:42 policy-db-migrator | 23:16:42 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.318+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.648852122Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=138.892µs 23:16:42 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:42 policy-db-migrator | 23:16:42 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.318+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:42 policy-pap | sasl.kerberos.service.name = null 23:16:42 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:42 kafka | sasl.oauthbearer.expected.audience = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.654358787Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.318+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:42 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | sasl.oauthbearer.expected.issuer = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.654446218Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=82.631µs 23:16:42 policy-apex-pdp | 
[2024-02-29T23:14:45.318+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 23:16:42 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:42 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.657525198Z level=info msg="Executing migration" id="create playlist table v2" 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.318+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], 
topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:42 policy-pap | sasl.login.callback.handler.class = null 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.658288546Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=765.928µs 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.319+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:42 policy-pap | sasl.login.class = null 23:16:42 policy-db-migrator | 23:16:42 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.661840041Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.335+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:42 policy-pap | sasl.login.connect.timeout.ms = null 23:16:42 policy-db-migrator | 23:16:42 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.66266309Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=822.409µs 23:16:42 policy-apex-pdp | [] 23:16:42 policy-pap | sasl.login.read.timeout.ms = null 23:16:42 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:42 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.666214905Z level=info msg="Executing migration" id="Update playlist table charset" 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.338+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:42 policy-pap | 
sasl.login.refresh.buffer.seconds = 300 23:16:42 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.666244515Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=30.31µs 23:16:42 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"bd7b6aa1-3559-4908-8969-b03734dbc54b","timestampMs":1709248485318,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"} 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:42 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.67074645Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.590+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:42 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:42 kafka | sasl.server.callback.handler.class = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.670774731Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=29.301µs 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.591+00:00|INFO|ServiceManager|main] service manager starting 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:42 kafka | sasl.server.max.receive.size = 524288 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.591+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:42 
policy-db-migrator | 23:16:42 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.68677336Z level=info msg="Executing migration" id="Add playlist column created_at" 23:16:42 kafka | security.inter.broker.protocol = PLAINTEXT 23:16:42 policy-db-migrator | 23:16:42 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.692248705Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.473275ms 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.591+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:42 kafka | security.providers = null 23:16:42 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:42 
policy-pap | sasl.mechanism = GSSAPI 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.700535657Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.619+00:00|INFO|ServiceManager|main] service manager started 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.702753459Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.216912ms 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.619+00:00|INFO|ServiceManager|main] service manager started 23:16:42 kafka | server.max.startup.time.ms = 9223372036854775807 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:42 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.707245584Z level=info msg="Executing migration" id="drop preferences table v2" 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.620+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
23:16:42 kafka | socket.connection.setup.timeout.max.ms = 30000 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:42 kafka | socket.connection.setup.timeout.ms = 10000 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.619+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.707414206Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=168.102µs 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:42 kafka | socket.listen.backlog.size = 50 23:16:42 policy-apex-pdp | 
[2024-02-29T23:14:45.726+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: FqFLOU6jRgiQltXq-uD-BA 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.709966811Z level=info msg="Executing migration" id="drop preferences table v3" 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:42 kafka | socket.receive.buffer.bytes = 102400 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.726+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Cluster ID: FqFLOU6jRgiQltXq-uD-BA 23:16:42 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.710049792Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=82.921µs 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:42 kafka | socket.request.max.bytes = 104857600 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.728+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.713260174Z level=info msg="Executing migration" id="create preferences table v3" 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:42 kafka | socket.send.buffer.bytes = 102400 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.728+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, 
concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.714093992Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=833.698µs 23:16:42 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:42 kafka | ssl.cipher.suites = [] 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.737+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] (Re-)joining group 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.719566997Z level=info msg="Executing migration" id="Update preferences table charset" 23:16:42 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:42 kafka | ssl.client.auth = none 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.719595247Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=28.55µs 23:16:42 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:42 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.753+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Request joining group due to: need to re-join with the given member-id: consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.730451206Z level=info msg="Executing migration" id="Add 
column team_id in preferences" 23:16:42 policy-pap | security.protocol = PLAINTEXT 23:16:42 kafka | ssl.endpoint.identification.algorithm = https 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.754+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:42 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:42 policy-pap | security.providers = null 23:16:42 policy-apex-pdp | [2024-02-29T23:14:45.754+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] (Re-)joining group 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.736414665Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.962259ms 23:16:42 kafka | ssl.engine.factory.class = null 23:16:42 policy-pap | send.buffer.bytes = 131072 23:16:42 policy-apex-pdp | [2024-02-29T23:14:46.296+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.740464685Z level=info msg="Executing migration" id="Update team_id column 
values in preferences" 23:16:42 policy-apex-pdp | [2024-02-29T23:14:46.298+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.740739048Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=274.823µs 23:16:42 kafka | ssl.key.password = null 23:16:42 policy-pap | session.timeout.ms = 45000 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.744761258Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:16:42 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | [2024-02-29T23:14:48.761+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025', protocol='range'} 23:16:42 kafka | ssl.keymanager.algorithm = SunX509 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.747897169Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.134461ms 23:16:42 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | [2024-02-29T23:14:48.768+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Finished assignment for group at generation 1: {consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025=Assignment(partitions=[policy-pdp-pap-0])} 23:16:42 kafka | ssl.keystore.certificate.chain = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.752867179Z 
level=info msg="Executing migration" id="Add column preferences.json_data" 23:16:42 policy-pap | ssl.cipher.suites = null 23:16:42 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:42 policy-apex-pdp | [2024-02-29T23:14:48.802+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025', protocol='range'} 23:16:42 kafka | ssl.keystore.key = null 23:16:42 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:42 policy-apex-pdp | [2024-02-29T23:14:48.802+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:42 kafka | ssl.keystore.location = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.756263923Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.396744ms 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:42 kafka | ssl.keystore.password = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.760819688Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:42 policy-apex-pdp | [2024-02-29T23:14:48.804+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Adding newly assigned partitions: 
policy-pdp-pap-0 23:16:42 policy-pap | ssl.engine.factory.class = null 23:16:42 kafka | ssl.keystore.type = JKS 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.760885209Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=66.171µs 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | ssl.key.password = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.765909369Z level=info msg="Executing migration" id="Add preferences index org_id" 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | [2024-02-29T23:14:48.810+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Found no committed offset for partition policy-pdp-pap-0 23:16:42 kafka | ssl.principal.mapping.rules = DEFAULT 23:16:42 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.766798778Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=886.409µs 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | [2024-02-29T23:14:48.819+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
23:16:42 kafka | ssl.protocol = TLSv1.3 23:16:42 policy-pap | ssl.keystore.certificate.chain = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.772557965Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:42 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:42 policy-apex-pdp | [2024-02-29T23:14:56.171+00:00|INFO|RequestLog|qtp1068445309-32] 172.17.0.2 - policyadmin [29/Feb/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10653 "-" "Prometheus/2.50.1" 23:16:42 kafka | ssl.provider = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.773694997Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.136512ms 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.319+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:42 policy-pap | ssl.keystore.key = null 23:16:42 kafka | ssl.secure.random.implementation = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.783306002Z level=info msg="Executing migration" id="create alert table v1" 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:42 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35bb4aa0-ff48-497e-84df-a13cf4a1f6c0","timestampMs":1709248505318,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"} 23:16:42 policy-pap | ssl.keystore.location = null 23:16:42 kafka | 
ssl.trustmanager.algorithm = PKIX 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.343+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:42 kafka | ssl.truststore.certificates = null 23:16:42 policy-pap | ssl.keystore.password = null 23:16:42 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35bb4aa0-ff48-497e-84df-a13cf4a1f6c0","timestampMs":1709248505318,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"} 23:16:42 policy-db-migrator | 23:16:42 kafka | ssl.truststore.location = null 23:16:42 policy-pap | ssl.keystore.type = JKS 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.784548765Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.246783ms 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.346+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:42 policy-db-migrator | 23:16:42 kafka | ssl.truststore.password = null 23:16:42 policy-pap | ssl.protocol = TLSv1.3 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.793980959Z level=info msg="Executing migration" id="add index alert org_id & id " 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.518+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:42 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:42 kafka | ssl.truststore.type = JKS 23:16:42 policy-pap | ssl.provider = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.795839087Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.857448ms 23:16:42 policy-apex-pdp | 
{"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","timestampMs":1709248505452,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:16:42 policy-pap | ssl.secure.random.implementation = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.806398763Z level=info msg="Executing migration" id="add index alert state" 23:16:42 kafka | transaction.max.timeout.ms = 900000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.807759286Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.361693ms 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.528+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:42 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:42 kafka | transaction.partition.verification.enable = true 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.528+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | ssl.truststore.certificates = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.811571464Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:16:42 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"d4391888-607f-4994-aede-b40e11cf69cc","timestampMs":1709248505528,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"} 23:16:42 policy-pap | ssl.truststore.location = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.812820997Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.255173ms 23:16:42 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.529+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.816594754Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 23:16:42 kafka | transaction.state.log.load.buffer.size = 5242880 23:16:42 policy-db-migrator | 23:16:42 policy-pap | ssl.truststore.password = null 23:16:42 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"63e32a2d-8cb2-4776-a518-f859a710d4f3","timestampMs":1709248505529,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.8171656Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=570.556µs 23:16:42 kafka | transaction.state.log.min.isr = 2 23:16:42 policy-pap | ssl.truststore.type = JKS 23:16:42 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.821536253Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:16:42 kafka | transaction.state.log.num.partitions = 50 
23:16:42 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.547+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.822394362Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=857.679µs 23:16:42 kafka | transaction.state.log.replication.factor = 3 23:16:42 policy-pap | 23:16:42 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d4391888-607f-4994-aede-b40e11cf69cc","timestampMs":1709248505528,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"} 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.825951368Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:16:42 kafka | transaction.state.log.segment.bytes = 104857600 23:16:42 policy-pap | [2024-02-29T23:14:43.681+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.547+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.826541923Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=590.475µs 23:16:42 kafka | transactional.id.expiration.ms = 604800000 23:16:42 policy-pap | [2024-02-29T23:14:43.681+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | 
[2024-02-29T23:15:05.556+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.830401352Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 23:16:42 kafka | unclean.leader.election.enable = false 23:16:42 policy-pap | [2024-02-29T23:14:43.681+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248483681 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"63e32a2d-8cb2-4776-a518-f859a710d4f3","timestampMs":1709248505529,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.842504883Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=12.10274ms 23:16:42 kafka | unstable.api.versions.enable = false 23:16:42 policy-pap | [2024-02-29T23:14:43.681+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Subscribed to topic(s): policy-pdp-pap 23:16:42 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.556+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.847439532Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 23:16:42 kafka | zookeeper.clientCnxnSocket = null 23:16:42 policy-pap | [2024-02-29T23:14:43.682+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:42 
policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.601+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.847889436Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=449.474µs 23:16:42 policy-pap | [2024-02-29T23:14:43.682+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=90a6ce0d-d2c8-411b-a6c6-dec263368d9a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2ea0161f 23:16:42 policy-apex-pdp | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","timestampMs":1709248505453,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.85130506Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:16:42 kafka | zookeeper.connect = zookeeper:2181 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY 
PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:42 policy-pap | [2024-02-29T23:14:43.682+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=90a6ce0d-d2c8-411b-a6c6-dec263368d9a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.851948067Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=643.107µs 23:16:42 kafka | zookeeper.connection.timeout.ms = null 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.604+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:42 policy-pap | [2024-02-29T23:14:43.682+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.855452462Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:16:42 kafka | zookeeper.max.in.flight.requests = 10 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"54fea3c5-a806-4c92-8dab-1bfaf7236758","timestampMs":1709248505604,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 policy-pap | allow.auto.create.topics = true 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.855869706Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=417.084µs 23:16:42 kafka | zookeeper.metadata.migration.enable = false 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.616+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:42 policy-pap | auto.commit.interval.ms = 5000 23:16:42 kafka | zookeeper.session.timeout.ms = 18000 23:16:42 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:16:42 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"54fea3c5-a806-4c92-8dab-1bfaf7236758","timestampMs":1709248505604,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.861077938Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:16:42 policy-pap | auto.include.jmx.reporter = true 23:16:42 kafka | zookeeper.set.acl = false 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.617+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.862132878Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.05477ms 23:16:42 policy-pap | auto.offset.reset = latest 23:16:42 kafka | zookeeper.ssl.cipher.suites = null 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.636+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.866023377Z level=info msg="Executing migration" id="create alert_notification table v1" 23:16:42 policy-pap | bootstrap.servers = [kafka:9092] 23:16:42 kafka | zookeeper.ssl.client.enable = false 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-apex-pdp | 
{"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68e6fc14-216e-4b7d-9108-21d5680aedaa","timestampMs":1709248505608,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.867070978Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.046961ms 23:16:42 kafka | zookeeper.ssl.crl.enable = false 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.642+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.906048616Z level=info msg="Executing migration" id="Add column is_default" 23:16:42 policy-pap | check.crcs = true 23:16:42 policy-db-migrator | 23:16:42 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"68e6fc14-216e-4b7d-9108-21d5680aedaa","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"92eb2a16-633e-4c6d-ac5e-481f7cfc27d7","timestampMs":1709248505642,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:42 policy-db-migrator | 23:16:42 kafka | zookeeper.ssl.enabled.protocols = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.911560821Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.512525ms 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.655+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:42 policy-pap | client.id = consumer-policy-pap-4 23:16:42 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:42 kafka | 
zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.918855164Z level=info msg="Executing migration" id="Add column frequency" 23:16:42 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"68e6fc14-216e-4b7d-9108-21d5680aedaa","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"92eb2a16-633e-4c6d-ac5e-481f7cfc27d7","timestampMs":1709248505642,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 policy-pap | client.rack = 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | zookeeper.ssl.keystore.location = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.921296698Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.440824ms 23:16:42 policy-apex-pdp | [2024-02-29T23:15:05.655+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:42 policy-pap | connections.max.idle.ms = 540000 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:16:42 kafka | zookeeper.ssl.keystore.password = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.924933385Z level=info msg="Executing migration" id="Add column send_reminder" 23:16:42 policy-apex-pdp | [2024-02-29T23:15:56.084+00:00|INFO|RequestLog|qtp1068445309-29] 172.17.0.2 - policyadmin [29/Feb/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10652 "-" "Prometheus/2.50.1" 23:16:42 policy-pap | default.api.timeout.ms = 60000 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | zookeeper.ssl.keystore.type = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.930425399Z level=info msg="Migration 
successfully executed" id="Add column send_reminder" duration=5.491044ms 23:16:42 policy-pap | enable.auto.commit = true 23:16:42 policy-db-migrator | 23:16:42 kafka | zookeeper.ssl.ocsp.enable = false 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.934383809Z level=info msg="Executing migration" id="Add column disable_resolve_message" 23:16:42 policy-pap | exclude.internal.topics = true 23:16:42 policy-db-migrator | 23:16:42 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.937795713Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.417684ms 23:16:42 policy-pap | fetch.max.bytes = 52428800 23:16:42 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:42 kafka | zookeeper.ssl.truststore.location = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.943113796Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:16:42 policy-pap | fetch.max.wait.ms = 500 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.943970904Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=855.888µs 23:16:42 policy-pap | fetch.min.bytes = 1 23:16:42 kafka | zookeeper.ssl.truststore.password = null 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.949894553Z level=info msg="Executing migration" id="Update alert table charset" 23:16:42 policy-pap | group.id = 
policy-pap 23:16:42 kafka | zookeeper.ssl.truststore.type = null 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.949972804Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=73.031µs 23:16:42 policy-pap | group.instance.id = null 23:16:42 kafka | (kafka.server.KafkaConfig) 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.954090635Z level=info msg="Executing migration" id="Update alert_notification table charset" 23:16:42 policy-pap | heartbeat.interval.ms = 3000 23:16:42 kafka | [2024-02-29 23:14:15,079] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.954139996Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=42.87µs 23:16:42 policy-pap | interceptor.classes = [] 23:16:42 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.957252567Z level=info msg="Executing migration" id="create notification_journal table v1" 23:16:42 kafka | [2024-02-29 23:14:15,080] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:42 policy-pap | internal.leave.group.on.close = true 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.958132415Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=879.898µs 23:16:42 kafka | [2024-02-29 23:14:15,081] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:42 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name 
VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.964535699Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:16:42 kafka | [2024-02-29 23:14:15,085] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:42 policy-pap | isolation.level = read_uncommitted 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.966304857Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.768608ms 23:16:42 kafka | [2024-02-29 23:14:15,119] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:16:42 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.970100775Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:16:42 kafka | [2024-02-29 23:14:15,123] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:16:42 policy-pap | max.partition.fetch.bytes = 1048576 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.971283937Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.182552ms 23:16:42 kafka | [2024-02-29 23:14:15,132] INFO Loaded 0 logs in 13ms (kafka.log.LogManager) 23:16:42 policy-pap | max.poll.interval.ms = 300000 23:16:42 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.977714101Z level=info msg="Executing migration" id="create alert_notification_state 
table v1"
23:16:42 kafka | [2024-02-29 23:14:15,134] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
23:16:42 policy-pap | max.poll.records = 500
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.979041354Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.331293ms
23:16:42 kafka | [2024-02-29 23:14:15,135] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.982916762Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
23:16:42 kafka | [2024-02-29 23:14:15,147] INFO Starting the log cleaner (kafka.log.LogCleaner)
23:16:42 policy-pap | metadata.max.age.ms = 300000
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.983890792Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=973.85µs
23:16:42 kafka | [2024-02-29 23:14:15,194] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
23:16:42 policy-pap | metric.reporters = []
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.988763311Z level=info msg="Executing migration" id="Add for to alert table"
23:16:42 kafka | [2024-02-29 23:14:15,245] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
23:16:42 policy-pap | metrics.num.samples = 2
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.992598639Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.834988ms
23:16:42 kafka | [2024-02-29 23:14:15,297] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
23:16:42 policy-pap | metrics.recording.level = INFO
23:16:42 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:03.996005913Z level=info msg="Executing migration" id="Add column uid in alert_notification"
23:16:42 kafka | [2024-02-29 23:14:15,331] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
23:16:42 policy-pap | metrics.sample.window.ms = 30000
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:15,718] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.000012673Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.00721ms
23:16:42 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
23:16:42 kafka | [2024-02-29 23:14:15,740] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.004363176Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
23:16:42 policy-pap | receive.buffer.bytes = 65536
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:15,741] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.004539878Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=176.802µs
23:16:42 policy-pap | reconnect.backoff.max.ms = 1000
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:15,746] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.007751276Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
23:16:42 policy-pap | reconnect.backoff.ms = 50
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:15,751] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.008582833Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=831.257µs
23:16:42 policy-pap | request.timeout.ms = 30000
23:16:42 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
23:16:42 kafka | [2024-02-29 23:14:15,776] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.014880848Z level=info msg="Executing migration" id="Remove unique index org_id_name"
23:16:42 policy-pap | retry.backoff.ms = 100
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:15,778] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.015685635Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=804.317µs
23:16:42 policy-pap | sasl.client.callback.handler.class = null
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:42 kafka | [2024-02-29 23:14:15,782] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.019203545Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
23:16:42 policy-pap | sasl.jaas.config = null
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:15,784] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.023026608Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.822763ms
23:16:42 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:15,785] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.029825926Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
23:16:42 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:15,801] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.029946267Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=120.151µs
23:16:42 policy-db-migrator | > upgrade 0730-toscaproperty.sql
23:16:42 kafka | [2024-02-29 23:14:15,802] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
23:16:42 policy-pap | sasl.kerberos.service.name = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.046079376Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:15,826] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
23:16:42 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.046817942Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=738.756µs
23:16:42 kafka | [2024-02-29 23:14:15,857] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1709248455845,1709248455845,1,0,0,72057609446883329,258,0,27
23:16:42 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.051443732Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
23:16:42 policy-pap | sasl.login.callback.handler.class = null
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | (kafka.zk.KafkaZkClient)
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:15,858] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.052132738Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=686.706µs
23:16:42 policy-pap | sasl.login.class = null
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:15,915] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.061602399Z level=info msg="Executing migration" id="Drop old annotation table v4"
23:16:42 policy-pap | sasl.login.connect.timeout.ms = null
23:16:42 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
23:16:42 kafka | [2024-02-29 23:14:15,924] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:42 policy-pap | sasl.login.read.timeout.ms = null
23:16:42 kafka | [2024-02-29 23:14:15,930] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.06168002Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=79.081µs
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:42 kafka | [2024-02-29 23:14:15,931] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.069571318Z level=info msg="Executing migration" id="create annotation table v5"
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
23:16:42 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:42 kafka | [2024-02-29 23:14:15,945] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.070206883Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=637.505µs
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:42 kafka | [2024-02-29 23:14:15,947] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.074306168Z level=info msg="Executing migration" id="add index annotation 0 v3"
23:16:42 policy-db-migrator |
23:16:42 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:42 kafka | [2024-02-29 23:14:15,958] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.075613709Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.307671ms
23:16:42 policy-db-migrator |
23:16:42 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.082018754Z level=info msg="Executing migration" id="add index annotation 1 v3"
23:16:42 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
23:16:42 kafka | [2024-02-29 23:14:15,960] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
23:16:42 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.083131444Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.1123ms
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:15,963] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
23:16:42 policy-pap | sasl.mechanism = GSSAPI
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.090308516Z level=info msg="Executing migration" id="add index annotation 2 v3"
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
23:16:42 kafka | [2024-02-29 23:14:15,968] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
23:16:42 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.091244494Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=862.037µs
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:15,982] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:42 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.09554701Z level=info msg="Executing migration" id="add index annotation 3 v3"
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:15,987] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
23:16:42 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.096254567Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=706.817µs
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:15,987] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.100663244Z level=info msg="Executing migration" id="add index annotation 4 v3"
23:16:42 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
23:16:42 kafka | [2024-02-29 23:14:16,000] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.101374471Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=711.036µs
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:16,000] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.11175494Z level=info msg="Executing migration" id="Update annotation table charset"
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:42 kafka | [2024-02-29 23:14:16,006] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.11180795Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=54.06µs
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:16,010] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
23:16:42 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.115518902Z level=info msg="Executing migration" id="Add column region_id to annotation table"
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:16,013] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
23:16:42 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.120616165Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=5.098613ms
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:16,031] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
23:16:42 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.12584384Z level=info msg="Executing migration" id="Drop category_id index"
23:16:42 policy-db-migrator | > upgrade 0770-toscarequirement.sql
23:16:42 kafka | [2024-02-29 23:14:16,036] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
23:16:42 policy-pap | security.protocol = PLAINTEXT
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.126676247Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=831.947µs
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:16,039] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:42 policy-pap | security.providers = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.133108262Z level=info msg="Executing migration" id="Add column tags to annotation table"
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
23:16:42 kafka | [2024-02-29 23:14:16,042] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
23:16:42 policy-pap | send.buffer.bytes = 131072
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.137170297Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.060165ms
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:16,060] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
23:16:42 policy-pap | session.timeout.ms = 45000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.143451541Z level=info msg="Executing migration" id="Create annotation_tag table v2"
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:16,061] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
23:16:42 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.144063266Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=611.305µs
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:16,061] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:42 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.1480346Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
23:16:42 policy-db-migrator | > upgrade 0780-toscarequirements.sql
23:16:42 policy-pap | ssl.cipher.suites = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.149443842Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.408942ms
23:16:42 kafka | [2024-02-29 23:14:16,062] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.156333101Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
23:16:42 kafka | [2024-02-29 23:14:16,062] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
23:16:42 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.157829454Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.496053ms
23:16:42 kafka | [2024-02-29 23:14:16,065] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | ssl.engine.factory.class = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.162655915Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
23:16:42 kafka | [2024-02-29 23:14:16,065] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
23:16:42 policy-db-migrator |
23:16:42 policy-pap | ssl.key.password = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.179667111Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=17.012306ms
23:16:42 kafka | [2024-02-29 23:14:16,065] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
23:16:42 policy-db-migrator |
23:16:42 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:42 kafka | [2024-02-29 23:14:16,066] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
23:16:42 policy-pap | ssl.keystore.certificate.chain = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.18421557Z level=info msg="Executing migration" id="Create annotation_tag table v3"
23:16:42 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
23:16:42 kafka | [2024-02-29 23:14:16,066] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.184700244Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=484.634µs
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:16,071] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:42 policy-pap | ssl.keystore.key = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.189693957Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
23:16:42 kafka | [2024-02-29 23:14:16,083] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
23:16:42 policy-pap | ssl.keystore.location = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.190539524Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=846.797µs
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | ssl.keystore.password = null
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:16,084] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.197554104Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:16,089] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.198051448Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=497.084µs
23:16:42 policy-pap | ssl.keystore.type = JKS
23:16:42 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
23:16:42 policy-pap | ssl.protocol = TLSv1.3
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:16,094] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.204214171Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
23:16:42 policy-pap | ssl.provider = null
23:16:42 kafka | [2024-02-29 23:14:16,095] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.205055218Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=839.667µs
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
23:16:42 policy-pap | ssl.secure.random.implementation = null
23:16:42 kafka | [2024-02-29 23:14:16,095] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.212023648Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:42 kafka | [2024-02-29 23:14:16,096] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.21231971Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=294.162µs
23:16:42 policy-db-migrator |
23:16:42 policy-pap | ssl.truststore.certificates = null
23:16:42 kafka | [2024-02-29 23:14:16,099] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.217048081Z level=info msg="Executing migration" id="Add created time to annotation table"
23:16:42 policy-db-migrator |
23:16:42 kafka | [2024-02-29 23:14:16,099] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
23:16:42 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
23:16:42 policy-pap | ssl.truststore.location = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.222990321Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.94329ms
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | ssl.truststore.password = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.227169217Z level=info msg="Executing migration" id="Add updated time to annotation table"
23:16:42 kafka | [2024-02-29 23:14:16,100] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:42 policy-pap | ssl.truststore.type = JKS
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.233851235Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=6.683118ms
23:16:42 kafka | [2024-02-29 23:14:16,101] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.238650445Z level=info msg="Executing migration" id="Add index for created in annotation table"
23:16:42 kafka | [2024-02-29 23:14:16,102] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.8:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
23:16:42 policy-db-migrator |
23:16:42 policy-pap |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.239281131Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=630.056µs
23:16:42 kafka | [2024-02-29 23:14:16,106] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
23:16:42 policy-db-migrator |
23:16:42 policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.246937477Z level=info msg="Executing migration" id="Add index for updated in annotation table"
23:16:42 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
23:16:42 policy-db-migrator | > upgrade 0820-toscatrigger.sql
23:16:42 policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.248343129Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.405022ms
23:16:42 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248483687
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.253379772Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
23:16:42 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:42 policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.253779625Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=396.673µs
23:16:42 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|ServiceManager|main] Policy PAP starting topics
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.260661514Z level=info msg="Executing migration" id="Add epoch_end column"
23:16:42 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.267135909Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.473715ms
23:16:42 kafka | [2024-02-29 23:14:16,112] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
23:16:42 policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=90a6ce0d-d2c8-411b-a6c6-dec263368d9a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.271740789Z level=info msg="Executing migration" id="Add index for epoch_end"
23:16:42 kafka | [2024-02-29 23:14:16,112] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
23:16:42 policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ee5900cb-eee5-431a-a953-12f2e7174bf4, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:42 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.272369924Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=628.925µs
23:16:42 kafka | [2024-02-29 23:14:16,112] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
23:16:42 policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f77f3cef-1815-4905-9b5a-40d6087ec71b, alive=false, publisher=null]]: starting
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.279874738Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
23:16:42 kafka | [2024-02-29 23:14:16,113] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
23:16:42 policy-pap | [2024-02-29T23:14:43.706+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:42 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.280138301Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=264.013µs
23:16:42 kafka | [2024-02-29 23:14:16,113] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
23:16:42 policy-pap | acks = -1
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.336883536Z level=info msg="Executing migration" id="Move region to single row"
23:16:42 kafka | [2024-02-29 23:14:16,114] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
23:16:42 policy-pap | auto.include.jmx.reporter = true
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.337924695Z level=info msg="Migration successfully executed" id="Move region to single row" duration=1.033879ms
23:16:42 kafka | [2024-02-29 23:14:16,115] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
23:16:42 policy-pap | batch.size = 16384
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.344398881Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
23:16:42 kafka | [2024-02-29 23:14:16,116] INFO Awaiting socket connections on 0.0.0.0:9092.
(kafka.network.DataPlaneAcceptor) 23:16:42 policy-pap | bootstrap.servers = [kafka:9092] 23:16:42 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.346169546Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.771185ms 23:16:42 kafka | [2024-02-29 23:14:16,128] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:16:42 policy-pap | buffer.memory = 33554432 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.351342Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:42 kafka | [2024-02-29 23:14:16,128] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 23:16:42 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:42 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.352257518Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=906.288µs 23:16:42 kafka | [2024-02-29 23:14:16,129] INFO Kafka startTimeMs: 1709248456123 (org.apache.kafka.common.utils.AppInfoParser) 23:16:42 policy-pap | client.id = producer-1 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:16,147] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:16:42 policy-pap | compression.type = none 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.356308102Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:42 kafka | [2024-02-29 23:14:16,179] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.357301191Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=992.489µs 23:16:42 policy-pap | connections.max.idle.ms = 540000 23:16:42 kafka | [2024-02-29 23:14:16,215] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.365864464Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:42 policy-pap | delivery.timeout.ms = 120000 23:16:42 kafka | [2024-02-29 23:14:16,285] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.367295827Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.429963ms 23:16:42 policy-pap | enable.idempotence = true 23:16:42 kafka | [2024-02-29 23:14:16,354] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.372843254Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:42 policy-pap | interceptor.classes = [] 23:16:42 kafka | [2024-02-29 23:14:16,358] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.374292266Z level=info msg="Migration successfully executed" 
id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.448172ms 23:16:42 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:42 kafka | [2024-02-29 23:14:21,181] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.380474619Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:42 policy-pap | linger.ms = 0 23:16:42 kafka | [2024-02-29 23:14:21,181] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.381988832Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.514423ms 23:16:42 policy-pap | max.block.ms = 60000 23:16:42 kafka | [2024-02-29 23:14:44,341] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.389539137Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:42 policy-pap | max.in.flight.requests.per.connection = 5 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.389677928Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=139.551µs 23:16:42 kafka | [2024-02-29 23:14:44,342] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 
-> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:42 policy-pap | max.request.size = 1048576 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.395507558Z level=info msg="Executing migration" id="create test_data table" 23:16:42 kafka | [2024-02-29 23:14:44,354] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 23:16:42 policy-pap | metadata.max.age.ms = 300000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.396777409Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.270651ms 23:16:42 kafka | [2024-02-29 23:14:44,363] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:16:42 policy-pap | metadata.max.idle.ms = 300000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.40152366Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:42 kafka | [2024-02-29 23:14:44,388] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica 
assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(j4DaYO3UQ1iVwjuKp7Abhw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(Fk26_aqxRF-nlCfGN2xAXQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:42 policy-pap | metric.reporters = [] 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.40275063Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.227381ms 23:16:42 kafka | [2024-02-29 23:14:44,390] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 23:16:42 policy-pap | metrics.num.samples = 2 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.412405513Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,406] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | metrics.recording.level = INFO 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.413819215Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.412072ms 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | metrics.sample.window.ms = 30000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.421006066Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:42 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:16:42 kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.422883992Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.876166ms 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | partitioner.availability.timeout.ms = 0 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.426474203Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:42 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:16:42 kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | partitioner.class = null 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.426944677Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=469.794µs 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,407] 
INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | partitioner.ignore.keys = false 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.431327155Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | receive.buffer.bytes = 32768 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.431721478Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=393.593µs 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | reconnect.backoff.max.ms = 1000 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.440962467Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:42 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:42 kafka | [2024-02-29 23:14:44,409] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | reconnect.backoff.ms = 50 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.441082378Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=120.171µs 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,409] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | request.timeout.ms = 30000 23:16:42 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.44711937Z level=info msg="Executing migration" id="create team table" 23:16:42 kafka | [2024-02-29 23:14:44,409] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | retries = 2147483647 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.44823858Z level=info msg="Migration successfully executed" id="create team table" duration=1.116889ms 23:16:42 kafka | [2024-02-29 23:14:44,409] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | retry.backoff.ms = 100 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.455510211Z level=info msg="Executing migration" id="add index team.org_id" 23:16:42 kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.client.callback.handler.class = null 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.457105605Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.585194ms 23:16:42 kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 
23:16:42 policy-pap | sasl.jaas.config = null 23:16:42 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.466841939Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:42 kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.468291471Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.446622ms 23:16:42 kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:42 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.474330193Z level=info msg="Executing migration" id="Add column uid in team" 23:16:42 kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.kerberos.service.name = null 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.479009033Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.67894ms 23:16:42 kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 23:16:42 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.483797814Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:42 kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.483977215Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=179.181µs 23:16:42 kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.login.callback.handler.class = null 23:16:42 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.489991817Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:16:42 kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.login.class = null 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.490917895Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=925.548µs 23:16:42 kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.login.connect.timeout.ms = null 23:16:42 
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.498841122Z level=info msg="Executing migration" id="create team member table" 23:16:42 kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.login.read.timeout.ms = null 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.500091153Z level=info msg="Migration successfully executed" id="create team member table" duration=1.250451ms 23:16:42 kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.506558559Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:42 kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.508279583Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.708774ms 23:16:42 kafka | [2024-02-29 23:14:44,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:42 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:42 policy-db-migrator | > upgrade 
0890-FK_ToscaServiceTemplate_policyTypesName.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.512418009Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
23:16:42 kafka | [2024-02-29 23:14:44,413] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.513423787Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.005018ms
23:16:42 kafka | [2024-02-29 23:14:44,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:42 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.519330178Z level=info msg="Executing migration" id="add index team_member.team_id"
23:16:42 kafka | [2024-02-29 23:14:44,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.521095943Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.764955ms
23:16:42 kafka | [2024-02-29 23:14:44,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-pap | sasl.mechanism = GSSAPI
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.526512349Z level=info msg="Executing migration" id="Add column email to team table"
23:16:42 kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.531553302Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.039893ms
23:16:42 kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:42 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.536229632Z level=info msg="Executing migration" id="Add column external to team_member table"
23:16:42 kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.540918833Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.688701ms
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.545488082Z level=info msg="Executing migration" id="Add column permission to team_member table"
23:16:42 kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.550372094Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.882872ms
23:16:42 kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.555408367Z level=info msg="Executing migration" id="create dashboard acl table"
23:16:42 kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.556347495Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=939.069µs
23:16:42 kafka | [2024-02-29 23:14:44,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.561982003Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
23:16:42 kafka | [2024-02-29 23:14:44,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
23:16:42 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.563707048Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.734185ms
23:16:42 kafka | [2024-02-29 23:14:44,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.570559446Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
23:16:42 kafka | [2024-02-29 23:14:44,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
23:16:42 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.572319751Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.759425ms
23:16:42 kafka | [2024-02-29 23:14:44,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | security.protocol = PLAINTEXT
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.579872586Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
23:16:42 kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 policy-pap | security.providers = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.58144545Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.571803ms
23:16:42 kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 policy-pap | send.buffer.bytes = 131072
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.593266361Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
23:16:42 kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
23:16:42 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.595139627Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.872536ms
23:16:42 kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.601472461Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
23:16:42 kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
23:16:42 policy-pap | ssl.cipher.suites = null
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:42 kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.603112695Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.640014ms
23:16:42 policy-db-migrator |
23:16:42 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:42 kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.608108528Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
23:16:42 policy-db-migrator |
23:16:42 policy-pap | ssl.engine.factory.class = null
23:16:42 kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.609138647Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.029199ms
23:16:42 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
23:16:42 policy-pap | ssl.key.password = null
23:16:42 kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.61538024Z level=info msg="Executing migration" id="add index dashboard_permission"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:42 kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.616298378Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=917.698µs
23:16:42 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
23:16:42 policy-pap | ssl.keystore.certificate.chain = null
23:16:42 kafka | [2024-02-29 23:14:44,417] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.622293489Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | ssl.keystore.key = null
23:16:42 kafka | [2024-02-29 23:14:44,423] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.622888684Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=590.075µs
23:16:42 policy-db-migrator |
23:16:42 policy-pap | ssl.keystore.location = null
23:16:42 kafka | [2024-02-29 23:14:44,423] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.630270108Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
23:16:42 policy-db-migrator |
23:16:42 policy-pap | ssl.keystore.password = null
23:16:42 kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.630647421Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=376.243µs
23:16:42 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
23:16:42 policy-pap | ssl.keystore.type = JKS
23:16:42 kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.637445949Z level=info msg="Executing migration" id="create tag table"
23:16:42 policy-pap | ssl.protocol = TLSv1.3
23:16:42 kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.638522448Z level=info msg="Migration successfully executed" id="create tag table" duration=1.076079ms
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | ssl.provider = null
23:16:42 kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.645817131Z level=info msg="Executing migration" id="add index tag.key_value"
23:16:42 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
23:16:42 policy-pap | ssl.secure.random.implementation = null
23:16:42 kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.646738948Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=921.337µs
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:42 kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.653525736Z level=info msg="Executing migration" id="create login attempt table"
23:16:42 policy-db-migrator |
23:16:42 policy-pap | ssl.truststore.certificates = null
23:16:42 kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.654599615Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.068829ms
23:16:42 policy-db-migrator |
23:16:42 policy-pap | ssl.truststore.location = null
23:16:42 kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.65982883Z level=info msg="Executing migration" id="add index login_attempt.username"
23:16:42 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
23:16:42 policy-pap | ssl.truststore.password = null
23:16:42 kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | ssl.truststore.type = JKS
23:16:42 kafka | [2024-02-29 23:14:44,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:42 policy-pap | transaction.timeout.ms = 60000
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.661329543Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.500343ms
23:16:42 kafka | [2024-02-29 23:14:44,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | transactional.id = null
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.670851685Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
23:16:42 kafka | [2024-02-29 23:14:44,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.672207696Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.355751ms
23:16:42 kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 policy-pap |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.680004013Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
23:16:42 kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
23:16:42 policy-pap | [2024-02-29T23:14:43.718+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.702350464Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=22.354091ms
23:16:42 kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | [2024-02-29T23:14:43.734+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.70765462Z level=info msg="Executing migration" id="create login_attempt v2"
23:16:42 kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:42 policy-pap | [2024-02-29T23:14:43.734+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.708562828Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=907.978µs
23:16:42 kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | [2024-02-29T23:14:43.734+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248483734
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.714032974Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
23:16:42 policy-pap | [2024-02-29T23:14:43.734+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f77f3cef-1815-4905-9b5a-40d6087ec71b, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.714947842Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=916.378µs
23:16:42 kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 policy-pap | [2024-02-29T23:14:43.735+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=44732de5-846e-4467-bb27-5d73124fde9c, alive=false, publisher=null]]: starting
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.721896082Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
23:16:42 kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 policy-pap | [2024-02-29T23:14:43.735+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.722565797Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=669.545µs
23:16:42 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
23:16:42 policy-pap | acks = -1
23:16:42 kafka | [2024-02-29 23:14:44,431] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.728611529Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | auto.include.jmx.reporter = true
23:16:42 kafka | [2024-02-29 23:14:44,431] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.72983681Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.225781ms
23:16:42 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:42 policy-pap | batch.size = 16384
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.797857552Z level=info msg="Executing migration" id="create user auth table"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | bootstrap.servers = [kafka:9092]
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.799120973Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.263911ms
23:16:42 policy-db-migrator |
23:16:42 policy-pap | buffer.memory = 33554432
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.805073234Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
23:16:42 policy-db-migrator |
23:16:42 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.806576637Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.505523ms
23:16:42 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
23:16:42 policy-pap | client.id = producer-2
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.814157461Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | compression.type = none
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.814269072Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=101.571µs
23:16:42 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:42 policy-pap | connections.max.idle.ms = 540000
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.821137371Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | delivery.timeout.ms = 120000
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.828169551Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.03266ms
23:16:42 policy-db-migrator |
23:16:42 policy-pap | enable.idempotence = true
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.834625206Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
23:16:42 policy-db-migrator |
23:16:42 policy-pap | interceptor.classes = []
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.839997422Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.371896ms
23:16:42 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
23:16:42 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.845776142Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | linger.ms = 0
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.851066947Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.290275ms
23:16:42 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:42 policy-pap | max.block.ms = 60000
23:16:42 kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.855781917Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | max.in.flight.requests.per.connection = 5
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.861066613Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.284096ms
23:16:42 policy-db-migrator |
23:16:42 policy-pap | max.request.size = 1048576
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.864916726Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
23:16:42 policy-db-migrator |
23:16:42 policy-pap | metadata.max.age.ms = 300000
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.866091726Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.18143ms
23:16:42 policy-pap | metadata.max.idle.ms = 300000
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.870437673Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
23:16:42 policy-pap | metric.reporters = []
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.875468206Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.030273ms
23:16:42 policy-pap | metrics.num.samples = 2
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.882539617Z level=info msg="Executing migration" id="create server_lock table"
23:16:42 policy-pap | metrics.recording.level = INFO
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.883473535Z level=info msg="Migration successfully executed" id="create server_lock table" duration=932.808µs
23:16:42 policy-pap | metrics.sample.window.ms = 30000
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.887739151Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
23:16:42 policy-pap | partitioner.adaptive.partitioning.enable = true
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.889319815Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.572284ms
23:16:42 policy-pap | partitioner.availability.timeout.ms = 0
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.894111336Z level=info msg="Executing migration" id="create user auth token table"
23:16:42 policy-pap | partitioner.class = null
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.895139445Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.029099ms
23:16:42 policy-pap | partitioner.ignore.keys = false
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.901010955Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
23:16:42 policy-pap | receive.buffer.bytes = 32768
23:16:42 kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.902049844Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.038859ms
23:16:42 policy-pap | reconnect.backoff.max.ms = 1000
23:16:42 kafka | [2024-02-29 23:14:44,434] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
23:16:42 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.910211134Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
23:16:42 policy-pap | reconnect.backoff.ms = 50
23:16:42 kafka | [2024-02-29 23:14:44,434] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.911218402Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.007088ms
23:16:42 policy-pap | request.timeout.ms = 30000
23:16:42 kafka | [2024-02-29 23:14:44,666] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.917003672Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
23:16:42 policy-pap |
retries = 2147483647 23:16:42 kafka | [2024-02-29 23:14:44,666] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.918751046Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.746804ms 23:16:42 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:42 policy-pap | retry.backoff.ms = 100 23:16:42 kafka | [2024-02-29 23:14:44,666] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.925055141Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | sasl.client.callback.handler.class = null 23:16:42 kafka | [2024-02-29 23:14:44,666] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.932447094Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=7.392413ms 23:16:42 policy-db-migrator | 
23:16:42 policy-pap | sasl.jaas.config = null 23:16:42 kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.937540727Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:42 policy-db-migrator | 23:16:42 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:42 kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.938642657Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.10121ms 23:16:42 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:42 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:42 kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.943857771Z level=info msg="Executing migration" id="create cache_data table" 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | sasl.kerberos.service.name = null 23:16:42 kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from 
NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.944702309Z level=info msg="Migration successfully executed" id="create cache_data table" duration=847.578µs 23:16:42 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:42 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:42 kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.951035233Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:42 kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.952619687Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.563023ms 23:16:42 policy-db-migrator | 23:16:42 policy-pap | sasl.login.callback.handler.class = null 23:16:42 kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.959139032Z level=info msg="Executing migration" id="create short_url table v1" 23:16:42 policy-db-migrator | 23:16:42 policy-pap | sasl.login.class = null 23:16:42 kafka | [2024-02-29 23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.960446213Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.302041ms 23:16:42 policy-pap | sasl.login.connect.timeout.ms = null 23:16:42 kafka | [2024-02-29 23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.966522806Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:42 policy-pap | sasl.login.read.timeout.ms = null 23:16:42 kafka | [2024-02-29 23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger) 23:16:42 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.967997128Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.473792ms 23:16:42 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:42 kafka | [2024-02-29 23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.982614773Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:42 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.982759544Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=145.011µs 23:16:42 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.989367011Z level=info msg="Executing migration" id="delete alert_definition table" 23:16:42 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:42 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:42 kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.989548493Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=191.602µs 23:16:42 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.99505483Z level=info msg="Executing migration" id="recreate alert_definition table" 23:16:42 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:42 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:42 
kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:04.996653833Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.598453ms 23:16:42 policy-pap | sasl.mechanism = GSSAPI 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.002454593Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:42 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.004061087Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.606054ms 23:16:42 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.01245381Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:42 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:42 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:42 kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.013566501Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.112571ms 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,670] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.01751219Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:42 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:16:42 kafka | [2024-02-29 23:14:44,670] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.017672301Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=160.881µs 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,670] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.021525819Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:42 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,670] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.023206926Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.679987ms 23:16:42 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,670] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.033000673Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:16:42 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:42 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:42 kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.034076564Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.074311ms 23:16:42 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.041517917Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:42 policy-pap | security.protocol = PLAINTEXT 23:16:42 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:42 kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.043538507Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=2.01121ms 23:16:42 policy-pap | security.providers = null 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.048651078Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:42 policy-pap | send.buffer.bytes = 131072 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.050006121Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.354543ms 23:16:42 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.056574137Z level=info msg="Executing migration" id="Add 
column paused in alert_definition" 23:16:42 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:42 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:42 kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.066105552Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.532474ms 23:16:42 policy-pap | ssl.cipher.suites = null 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.071318983Z level=info msg="Executing migration" id="drop alert_definition table" 23:16:42 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:42 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:42 kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.072712147Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.392484ms 23:16:42 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:42 policy-db-migrator | 
-------------- 23:16:42 kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.078771648Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:42 policy-pap | ssl.engine.factory.class = null 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.078888729Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=117.641µs 23:16:42 kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 policy-pap | ssl.key.password = null 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.082508065Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:42 kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:42 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:42 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.084121641Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.612206ms 23:16:42 kafka | [2024-02-29 
23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:42 policy-pap | ssl.keystore.certificate.chain = null
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.09098817Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
23:16:42 kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:42 policy-pap | ssl.keystore.key = null
23:16:42 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.092653456Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.670466ms
23:16:42 kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:42 policy-pap | ssl.keystore.location = null
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.100817058Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
23:16:42 kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:42 policy-pap | ssl.keystore.password = null
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.101873058Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.0619ms
23:16:42 kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:42 policy-pap | ssl.keystore.type = JKS
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.109966939Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
23:16:42 kafka | [2024-02-29 23:14:44,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:42 policy-pap | ssl.protocol = TLSv1.3
23:16:42 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.1100704Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=103.531µs
23:16:42 kafka | [2024-02-29 23:14:44,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:42 policy-pap | ssl.provider = null
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.115459484Z level=info msg="Executing migration" id="drop alert_definition_version table"
23:16:42 kafka | [2024-02-29 23:14:44,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:42 policy-pap | ssl.secure.random.implementation = null
23:16:42 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.116965729Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.506555ms
23:16:42 kafka | [2024-02-29 23:14:44,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:42 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.123262362Z level=info msg="Executing migration" id="create alert_instance table"
23:16:42 kafka | [2024-02-29 23:14:44,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:42 policy-pap | ssl.truststore.certificates = null
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.124629935Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.367414ms
23:16:42 kafka | [2024-02-29 23:14:44,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
23:16:42 policy-pap | ssl.truststore.location = null
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.13007223Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
23:16:42 kafka | [2024-02-29 23:14:44,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
23:16:42 policy-pap | ssl.truststore.password = null
23:16:42 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.131600715Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.525755ms
23:16:42 kafka | [2024-02-29 23:14:44,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
23:16:42 policy-pap | ssl.truststore.type = JKS
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.137548384Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
23:16:42 kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
23:16:42 policy-pap | transaction.timeout.ms = 60000
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.138646675Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.099551ms
23:16:42 kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
23:16:42 policy-pap | transactional.id = null
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.148985038Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
23:16:42 kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
23:16:42 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:42 policy-db-migrator | > upgrade 0150-pdpstatistics.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.157965938Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=8.97869ms
23:16:42 kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
23:16:42 policy-pap |
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.163730195Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
23:16:42 policy-pap | [2024-02-29T23:14:43.736+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
23:16:42 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
23:16:42 kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.164776786Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.046261ms
23:16:42 policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:42 policy-db-migrator | --------------
23:16:42 kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.172947307Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
23:16:42 policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.173990288Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.042811ms
23:16:42 kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248483738
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.180109959Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
23:16:42 kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=44732de5-846e-4467-bb27-5d73124fde9c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:42 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.218989557Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=38.879988ms
23:16:42 kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.223182819Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
23:16:42 kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
23:16:42 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.258774304Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=35.586504ms
23:16:42 kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.740+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.263259698Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
23:16:42 kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.742+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.264841224Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.580926ms
23:16:42 kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.744+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.270715313Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
23:16:42 kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.744+00:00|INFO|TimerManager|Thread-9] timer manager update started
23:16:42 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.272647452Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.930989ms
23:16:42 kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.746+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.280001135Z level=info msg="Executing migration" id="add current_reason column related to current_state"
23:16:42 kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.746+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
23:16:42 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.288886964Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.892439ms
23:16:42 kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.747+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
23:16:42 policy-db-migrator | JOIN pdpstatistics b
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.295431129Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
23:16:42 kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.747+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
23:16:42 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.304726352Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=9.287473ms
23:16:42 kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.748+00:00|INFO|ServiceManager|main] Policy PAP started
23:16:42 policy-db-migrator | SET a.id = b.id
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.311374628Z level=info msg="Executing migration" id="create alert_rule table"
23:16:42 kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:43.749+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.778 seconds (process running for 12.492)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.313093175Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.722897ms
23:16:42 kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.289+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: FqFLOU6jRgiQltXq-uD-BA
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.324417499Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
23:16:42 kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.291+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: FqFLOU6jRgiQltXq-uD-BA
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.325644131Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.225592ms
23:16:42 kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.294+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
23:16:42 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.332065895Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
23:16:42 kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.294+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Cluster ID: FqFLOU6jRgiQltXq-uD-BA
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.333515269Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.449884ms
23:16:42 kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.397+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:42 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.339124875Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
23:16:42 kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.398+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: FqFLOU6jRgiQltXq-uD-BA
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.340287397Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.157752ms
23:16:42 kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.418+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.352005544Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
23:16:42 kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.438+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.352178875Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=177.891µs
23:16:42 kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.452+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
23:16:42 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.355948273Z level=info msg="Executing migration" id="add column for to alert_rule"
23:16:42 kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.520+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.362298987Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.350394ms
23:16:42 kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.531+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.368432978Z level=info msg="Executing migration" id="add column annotations to alert_rule"
23:16:42 kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.636+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.374806461Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.373093ms
23:16:42 kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.647+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.378482618Z level=info msg="Executing migration" id="add column labels to alert_rule"
23:16:42 kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.751+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.38474521Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.262192ms
23:16:42 kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.753+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:42 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.389539958Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
23:16:42 kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.857+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.390891892Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.351584ms
23:16:42 kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.860+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:42 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.396201115Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
23:16:42 kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:44.963+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.397296746Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.095231ms
23:16:42 kafka | [2024-02-29
23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:44.964+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:45.068+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.400530218Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:16:42 kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:45.071+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:42 policy-db-migrator | > upgrade 0210-sequence.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.406694279Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.163181ms 23:16:42 kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:45.175+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.411232695Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:16:42 kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:16:42 policy-pap | 
[2024-02-29T23:14:45.187+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.417119574Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.886119ms 23:16:42 kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:45.281+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.424681459Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:16:42 kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:16:42 policy-pap | 
[2024-02-29T23:14:45.295+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:42 policy-db-migrator | 23:16:42 policy-db-migrator | 23:16:42 policy-pap | [2024-02-29T23:14:45.387+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:42 kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.42575644Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.071531ms 23:16:42 policy-db-migrator | > upgrade 0220-sequence.sql 23:16:42 policy-pap | [2024-02-29T23:14:45.402+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:42 kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:16:42 grafana | 
logger=migrator t=2024-02-29T23:14:05.466733877Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | [2024-02-29T23:14:45.504+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:42 kafka | [2024-02-29 23:14:44,683] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.47501725Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=8.281713ms 23:16:42 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:42 kafka | [2024-02-29 23:14:44,686] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.478804938Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | [2024-02-29T23:14:45.515+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] (Re-)joining group 23:16:42 kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.484701097Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.890528ms 23:16:42 policy-db-migrator | 23:16:42 
policy-pap | [2024-02-29T23:14:45.516+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:42 kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.489606336Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:16:42 policy-db-migrator | 23:16:42 policy-pap | [2024-02-29T23:14:45.526+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:42 kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.489696166Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=90.36µs 23:16:42 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:16:42 policy-pap | [2024-02-29T23:14:45.601+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Request joining group due to: need to re-join with the given member-id: consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d 23:16:42 kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.492937419Z level=info msg="Executing migration" id="create alert_rule_version table" 
23:16:42 policy-pap | [2024-02-29T23:14:45.602+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:42 kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.493919009Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=981.15µs 23:16:42 policy-pap | [2024-02-29T23:14:45.602+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] (Re-)joining group 23:16:42 kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.497342233Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:16:42 policy-pap | [2024-02-29T23:14:45.603+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da 
23:16:42 kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.498488174Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.137061ms 23:16:42 policy-pap | [2024-02-29T23:14:45.604+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:42 kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.504004509Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:16:42 policy-pap | [2024-02-29T23:14:45.604+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:42 kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.505138751Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.133932ms 23:16:42 policy-pap | 
[2024-02-29T23:14:48.627+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Successfully joined group with generation Generation{generationId=1, memberId='consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d', protocol='range'} 23:16:42 kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.511693556Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:16:42 policy-pap | [2024-02-29T23:14:48.629+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da', protocol='range'} 23:16:42 kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.511798757Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=106.431µs 23:16:42 policy-pap | [2024-02-29T23:14:48.637+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment 
for group at generation 1: {consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da=Assignment(partitions=[policy-pdp-pap-0])} 23:16:42 kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.515389513Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:16:42 policy-pap | [2024-02-29T23:14:48.637+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Finished assignment for group at generation 1: {consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d=Assignment(partitions=[policy-pdp-pap-0])} 23:16:42 kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.522212701Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.831418ms 23:16:42 kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:48.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da', protocol='range'} 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:48.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Successfully synced group in generation Generation{generationId=1, memberId='consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d', protocol='range'} 23:16:42 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.527356152Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:16:42 kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:48.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.541678935Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=14.311053ms 23:16:42 kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:48.685+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:42 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.545725516Z 
level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:16:42 kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:48.690+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.552574674Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.858389ms 23:16:42 kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:48.690+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Adding newly assigned partitions: policy-pdp-pap-0 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.55719351Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:16:42 kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:48.711+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Found no committed offset for partition policy-pdp-pap-0 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.565018248Z level=info msg="Migration successfully executed" 
id="add rule_group_idx column to alert_rule_version" duration=7.826338ms 23:16:42 kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:48.717+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:16:42 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.568451752Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:16:42 kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:14:48.736+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.574575883Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.123681ms
23:16:42 kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:48.736+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:42 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.580061388Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
23:16:42 kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:51.186+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.58022312Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=127.001µs
23:16:42 kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:51.186+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.583764805Z level=info msg="Executing migration" id=create_alert_configuration_table
23:16:42 kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:14:51.188+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.584617764Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=853.199µs
23:16:42 kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.365+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.588094248Z level=info msg="Executing migration" id="Add column default in alert_configuration"
23:16:42 kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | []
23:16:42 policy-db-migrator | > upgrade 0140-toscaparameter.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.594344081Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.249243ms
23:16:42 kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.366+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.598190599Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
23:16:42 kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35bb4aa0-ff48-497e-84df-a13cf4a1f6c0","timestampMs":1709248505318,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"}
23:16:42 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.598356081Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=164.492µs
23:16:42 kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.366+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.634600322Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
23:16:42 kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35bb4aa0-ff48-497e-84df-a13cf4a1f6c0","timestampMs":1709248505318,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"}
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.64542828Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=10.829488ms
23:16:42 kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.377+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.648981286Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
23:16:42 kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.472+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting
23:16:42 policy-db-migrator | > upgrade 0150-toscaproperty.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.649824394Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=842.818µs
23:16:42 kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.472+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting listener
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.653059656Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
23:16:42 kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.472+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting timer
23:16:42 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.65942964Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.361514ms
23:16:42 kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.473+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=161524c5-f252-4fdd-a0eb-d79ad94ffa8f, expireMs=1709248535473]
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.665355839Z level=info msg="Executing migration" id=create_ngalert_configuration_table
23:16:42 kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.476+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting enqueue
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.666275288Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=919.159µs
23:16:42 kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.476+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate started
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.670244638Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
23:16:42 kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.476+00:00|INFO|TimerManager|Thread-9] update timer waiting 29997ms Timer [name=161524c5-f252-4fdd-a0eb-d79ad94ffa8f, expireMs=1709248535473]
23:16:42 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.67146699Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.222232ms
23:16:42 kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.478+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.687891354Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
23:16:42 kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","timestampMs":1709248505452,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.696919224Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=9.02907ms
23:16:42 kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.516+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.717429789Z level=info msg="Executing migration" id="create provenance_type table"
23:16:42 kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","timestampMs":1709248505452,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:42 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.718875793Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.436874ms
23:16:42 kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.516+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.728895503Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
23:16:42 kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.517+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.730220086Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.317393ms
23:16:42 kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.757976373Z level=info msg="Executing migration" id="create alert_image table"
23:16:42 policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","timestampMs":1709248505452,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:42 kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.761093434Z level=info msg="Migration successfully executed" id="create alert_image table" duration=3.118611ms
23:16:42 policy-pap | [2024-02-29T23:15:05.518+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
23:16:42 kafka | [2024-02-29 23:14:44,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.778353036Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
23:16:42 policy-pap | [2024-02-29T23:15:05.542+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:42 kafka | [2024-02-29 23:14:44,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.780302556Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.9489ms
23:16:42 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d4391888-607f-4994-aede-b40e11cf69cc","timestampMs":1709248505528,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"}
23:16:42 kafka | [2024-02-29 23:14:44,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.808305465Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
23:16:42 policy-pap | [2024-02-29T23:15:05.546+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:42 kafka | [2024-02-29 23:14:44,693] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.808551238Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=245.963µs
23:16:42 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d4391888-607f-4994-aede-b40e11cf69cc","timestampMs":1709248505528,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"}
23:16:42 kafka | [2024-02-29 23:14:44,701] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | [2024-02-29T23:15:05.546+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.820065423Z level=info msg="Executing migration" id=create_alert_configuration_history_table
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.555+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.822302195Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=2.235642ms
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"63e32a2d-8cb2-4776-a518-f859a710d4f3","timestampMs":1709248505529,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.843463226Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.8518814Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=8.196832ms
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping enqueue
23:16:42 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.862291194Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping timer
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.862840109Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.578+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=161524c5-f252-4fdd-a0eb-d79ad94ffa8f, expireMs=1709248535473]
23:16:42 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.871258173Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping listener
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.87187845Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=626.017µs
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopped
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.881254593Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.583+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.882550476Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.300163ms
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"63e32a2d-8cb2-4776-a518-f859a710d4f3","timestampMs":1709248505529,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:42 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.887094391Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.584+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 161524c5-f252-4fdd-a0eb-d79ad94ffa8f
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.893253563Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.161142ms
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:05.587+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate successful
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.896752378Z level=info msg="Executing migration" id="create library_element table v1"
23:16:42 policy-pap | [2024-02-29T23:15:05.587+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 start publishing next request
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-db-migrator |
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.897797928Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.05401ms
23:16:42 policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange starting
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.904224042Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
23:16:42 policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange starting listener
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.905292173Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.063841ms
23:16:42 policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange starting timer
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.908429284Z level=info msg="Executing migration" id="create library_element_connection table v1"
23:16:42 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
23:16:42 policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=6f96306c-4911-4d0f-b1c5-6fbfc3da40bc, expireMs=1709248535588]
23:16:42 kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.909652066Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.221822ms
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange starting enqueue
23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.912930859Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
23:16:42 policy-db-migrator |
23:16:42 policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange started
23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.91402759Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.094301ms
23:16:42 policy-db-migrator |
23:16:42 policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=6f96306c-4911-4d0f-b1c5-6fbfc3da40bc, expireMs=1709248535588]
23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.923943999Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
23:16:42 policy-db-migrator | > upgrade 0100-upgrade.sql
23:16:42 policy-pap | [2024-02-29T23:15:05.589+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.925659736Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.711227ms
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","timestampMs":1709248505453,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.929652486Z level=info msg="Executing migration" id="increase max description length to 2048"
23:16:42 policy-db-migrator | select 'upgrade to 1100 completed' as msg
23:16:42 policy-pap | [2024-02-29T23:15:05.602+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.929686186Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=28.79µs
23:16:42 policy-db-migrator | --------------
23:16:42 policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","timestampMs":1709248505453,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.933418413Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
23:16:42 policy-db-migrator |
23:16:42 policy-pap | [2024-02-29T23:15:05.602+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.933493124Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=70.991µs
23:16:42 policy-db-migrator | msg
23:16:42 policy-pap | [2024-02-29T23:15:05.618+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.942679156Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
23:16:42 policy-db-migrator | upgrade to 1100 completed
23:16:42 policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","timestampMs":1709248505453,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.943020409Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=336.033µs
23:16:42 policy-db-migrator |
23:16:42 policy-pap |
[2024-02-29T23:15:05.619+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.949035969Z level=info msg="Executing migration" id="create data_keys table" 23:16:42 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:16:42 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"54fea3c5-a806-4c92-8dab-1bfaf7236758","timestampMs":1709248505604,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.950201811Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.171472ms 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | [2024-02-29T23:15:05.620+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 
6f96306c-4911-4d0f-b1c5-6fbfc3da40bc 23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.95718103Z level=info msg="Executing migration" id="create secrets table" 23:16:42 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:16:42 policy-pap | [2024-02-29T23:15:05.620+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:42 kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.958380292Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.199462ms 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | [2024-02-29T23:15:05.623+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:05.965033989Z 
level=info msg="Executing migration" id="rename data_keys name column to id" 23:16:42 policy-db-migrator | 23:16:42 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"54fea3c5-a806-4c92-8dab-1bfaf7236758","timestampMs":1709248505604,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.010732635Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=45.700126ms 23:16:42 policy-db-migrator | 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange stopping 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.016037358Z level=info msg="Executing migration" id="add name column into data_keys" 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:42 
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange stopping enqueue 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.022937424Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.900366ms 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange stopping timer 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.062792677Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:16:42 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=6f96306c-4911-4d0f-b1c5-6fbfc3da40bc, expireMs=1709248535588] 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.062986029Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=192.122µs 23:16:42 policy-db-migrator | 
-------------- 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange stopping listener 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.067394235Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:16:42 policy-db-migrator | 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange stopped 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.114437756Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=47.042831ms 23:16:42 policy-db-migrator | -------------- 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange successful 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.118061095Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:16:42 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 start publishing next request 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.16432068Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=46.261555ms 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.169285451Z level=info msg="Executing migration" id="create kv_store table v1" 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting listener 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.170193928Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=912.487µs 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting timer 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.174874886Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 23:16:42 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer 
[name=68e6fc14-216e-4b7d-9108-21d5680aedaa, expireMs=1709248535624] 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.175642532Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=769.356µs 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting enqueue 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.178646287Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate started 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.178809528Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=163.311µs 23:16:42 policy-db-migrator | 
-------------- 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.625+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.181861063Z level=info msg="Executing migration" id="create permission table" 23:16:42 policy-db-migrator | 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68e6fc14-216e-4b7d-9108-21d5680aedaa","timestampMs":1709248505608,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.182479108Z level=info msg="Migration successfully executed" id="create permission table" duration=619.465µs 23:16:42 policy-db-migrator | -------------- 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 
from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.639+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:42 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.1913802Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:16:42 kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:42 policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68e6fc14-216e-4b7d-9108-21d5680aedaa","timestampMs":1709248505608,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.192155166Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=774.656µs 23:16:42 kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.640+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.194981189Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 
23:16:42 kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.641+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.195748385Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=767.916µs 23:16:42 kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:42 policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68e6fc14-216e-4b7d-9108-21d5680aedaa","timestampMs":1709248505608,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.198981611Z level=info msg="Executing migration" id="create role table" 23:16:42 kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.641+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.199559066Z level=info msg="Migration successfully executed" id="create role table" duration=576.845µs 23:16:42 kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] 
Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.653+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:42 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.204855679Z level=info msg="Executing migration" id="add column display_name" 23:16:42 kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:42 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"68e6fc14-216e-4b7d-9108-21d5680aedaa","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"92eb2a16-633e-4c6d-ac5e-481f7cfc27d7","timestampMs":1709248505642,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.210404184Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.548295ms 23:16:42 kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:42 policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:42 policy-db-migrator | 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.213766031Z level=info 
msg="Executing migration" id="add column group_name" 23:16:42 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"68e6fc14-216e-4b7d-9108-21d5680aedaa","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"92eb2a16-633e-4c6d-ac5e-481f7cfc27d7","timestampMs":1709248505642,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:42 kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:42 policy-db-migrator | -------------- 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.220852499Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.085398ms 23:16:42 policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 68e6fc14-216e-4b7d-9108-21d5680aedaa 23:16:42 kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:42 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.224021954Z level=info msg="Executing migration" id="add index role.org_id" 23:16:42 policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping 23:16:42 kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition 
for partition __consumer_offsets-11 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.225004742Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=982.288µs
23:16:42 policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping enqueue
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
23:16:42 policy-db-migrator | 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.230318925Z level=info msg="Executing migration" id="add unique index role_org_id_name"
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.231411864Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.092439ms
23:16:42 policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping timer
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
23:16:42 policy-db-migrator | TRUNCATE TABLE sequence
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.234851532Z level=info msg="Executing migration" id="add index role_org_id_uid"
23:16:42 policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=68e6fc14-216e-4b7d-9108-21d5680aedaa, expireMs=1709248535624]
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.235963531Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.111369ms
23:16:42 policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping listener
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
23:16:42 policy-db-migrator | 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.239671941Z level=info msg="Executing migration" id="create team role table"
23:16:42 policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopped
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
23:16:42 policy-db-migrator | 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.24072531Z level=info msg="Migration successfully executed" id="create team role table" duration=1.052959ms
23:16:42 policy-pap | [2024-02-29T23:15:05.660+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate successful
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
23:16:42 policy-db-migrator | > upgrade 0100-pdpstatistics.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.245961872Z level=info msg="Executing migration" id="add index team_role.org_id"
23:16:42 policy-pap | [2024-02-29T23:15:05.660+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 has no more requests
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.247690536Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.711924ms
23:16:42 policy-pap | [2024-02-29T23:15:11.854+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
23:16:42 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.25184377Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
23:16:42 policy-pap | [2024-02-29T23:15:11.862+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.253628994Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.776804ms
23:16:42 policy-pap | [2024-02-29T23:15:12.298+00:00|INFO|SessionData|http-nio-6969-exec-5] unknown group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
23:16:42 policy-db-migrator | 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.257043082Z level=info msg="Executing migration" id="add index team_role.team_id"
23:16:42 policy-pap | [2024-02-29T23:15:12.849+00:00|INFO|SessionData|http-nio-6969-exec-5] create cached group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.258146801Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.103069ms
23:16:42 policy-pap | [2024-02-29T23:15:12.849+00:00|INFO|SessionData|http-nio-6969-exec-5] creating DB group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
23:16:42 policy-db-migrator | DROP TABLE pdpstatistics
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.263472814Z level=info msg="Executing migration" id="create user role table"
23:16:42 policy-pap | [2024-02-29T23:15:13.414+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.264653234Z level=info msg="Migration successfully executed" id="create user role table" duration=1.18324ms
23:16:42 policy-pap | [2024-02-29T23:15:13.704+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
23:16:42 policy-db-migrator | 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.268179882Z level=info msg="Executing migration" id="add index user_role.org_id"
23:16:42 policy-pap | [2024-02-29T23:15:13.806+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
23:16:42 policy-db-migrator | 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.269788655Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.603473ms
23:16:42 policy-pap | [2024-02-29T23:15:13.807+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
23:16:42 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.273220413Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
23:16:42 policy-pap | [2024-02-29T23:15:13.808+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.274339662Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.118689ms
23:16:42 policy-pap | [2024-02-29T23:15:13.822+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-29T23:15:13Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-29T23:15:13Z, user=policyadmin)]
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
23:16:42 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.2789394Z level=info msg="Executing migration" id="add index user_role.user_id"
23:16:42 policy-pap | [2024-02-29T23:15:14.527+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.280035868Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.096038ms
23:16:42 policy-pap | [2024-02-29T23:15:14.529+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
23:16:42 policy-db-migrator | 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.284506475Z level=info msg="Executing migration" id="create builtin role table"
23:16:42 policy-pap | [2024-02-29T23:15:14.529+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
23:16:42 policy-db-migrator | 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.285301021Z level=info msg="Migration successfully executed" id="create builtin role table" duration=793.996µs
23:16:42 policy-pap | [2024-02-29T23:15:14.529+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
23:16:42 policy-db-migrator | > upgrade 0120-statistics_sequence.sql
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.289933059Z level=info msg="Executing migration" id="add index builtin_role.role_id"
23:16:42 policy-pap | [2024-02-29T23:15:14.529+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.291552032Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.612433ms
23:16:42 policy-pap | [2024-02-29T23:15:14.543+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-29T23:15:14Z, user=policyadmin)]
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:16:42 policy-db-migrator | DROP TABLE statistics_sequence
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.296616903Z level=info msg="Executing migration" id="add index builtin_role.name"
23:16:42 policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
23:16:42 policy-db-migrator | --------------
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.298282906Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.665543ms
23:16:42 policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
23:16:42 policy-db-migrator | 
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.301450852Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
23:16:42 policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
23:16:42 policy-db-migrator | policyadmin: OK: upgrade (1300)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.309097884Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.646642ms
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
23:16:42 policy-db-migrator | name version
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.312644913Z level=info msg="Executing migration" id="add index builtin_role.org_id"
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
23:16:42 policy-db-migrator | policyadmin 1300
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.313707251Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.061898ms
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
23:16:42 policy-db-migrator | ID script operation from_version to_version tag success atTime
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.31849407Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:14.955+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-29T23:15:14Z, user=policyadmin)]
23:16:42 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.319567739Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.069239ms
23:16:42 policy-pap | [2024-02-29T23:15:35.474+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=161524c5-f252-4fdd-a0eb-d79ad94ffa8f, expireMs=1709248535473]
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
23:16:42 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.322795325Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
23:16:42 policy-pap | [2024-02-29T23:15:35.547+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
23:16:42 policy-pap | [2024-02-29T23:15:35.550+00:00|INFO|SessionData|http-nio-6969-exec-10] deleting DB group testGroup
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
23:16:42 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 policy-pap | [2024-02-29T23:15:35.588+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=6f96306c-4911-4d0f-b1c5-6fbfc3da40bc, expireMs=1709248535588]
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
23:16:42 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
23:16:42 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.323890564Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.097409ms
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
23:16:42 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.327368642Z level=info msg="Executing migration" id="add unique index role.uid"
23:16:42 kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
23:16:42 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.328724503Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.356461ms
23:16:42 kafka | [2024-02-29 23:14:44,745] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
23:16:42 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.333113049Z level=info msg="Executing migration" id="create seed assignment table"
23:16:42 kafka | [2024-02-29 23:14:44,746] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
23:16:42 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.333896005Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=782.236µs
23:16:42 kafka | [2024-02-29 23:14:44,794] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.337997038Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
23:16:42 kafka | [2024-02-29 23:14:44,805] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.339138558Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.14064ms
23:16:42 kafka | [2024-02-29 23:14:44,807] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.342956868Z level=info msg="Executing migration" id="add column hidden to role table"
23:16:42 kafka | [2024-02-29 23:14:44,808] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.350706511Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.749453ms
23:16:42 kafka | [2024-02-29 23:14:44,810] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,825] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.355011266Z level=info msg="Executing migration" id="permission kind migration"
23:16:42 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,826] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.362762929Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.746783ms
23:16:42 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,826] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.366252807Z level=info msg="Executing migration" id="permission attribute migration"
23:16:42 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,826] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.374613755Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.357328ms
23:16:42 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,827] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.378627718Z level=info msg="Executing migration" id="permission identifier migration"
23:16:42 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,834] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.386896995Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.268757ms
23:16:42 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,835] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.391119359Z level=info msg="Executing migration" id="add permission identifier index"
23:16:42 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,835] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.391884885Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=764.966µs
23:16:42 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,835] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.395200322Z level=info msg="Executing migration" id="create query_history table v1"
23:16:42 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,835] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.395896758Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=695.926µs
23:16:42 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
23:16:42 kafka | [2024-02-29 23:14:44,842] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.399388226Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
23:16:42 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 kafka | [2024-02-29 23:14:44,842] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.40118333Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.794084ms
23:16:42 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 kafka | [2024-02-29 23:14:44,842] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.406337602Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
23:16:42 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 kafka | [2024-02-29 23:14:44,842] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.406404403Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=73.851µs
23:16:42 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 kafka | [2024-02-29 23:14:44,842] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.410323355Z level=info msg="Executing migration" id="rbac disabled migrator"
23:16:42 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.410381395Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=56.68µs
23:16:42 kafka | [2024-02-29 23:14:44,850] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.415414836Z level=info msg="Executing migration" id="teams permissions migration"
23:16:42 kafka | [2024-02-29 23:14:44,851] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.416228972Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=814.466µs
23:16:42 kafka | [2024-02-29 23:14:44,851] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.420726029Z level=info msg="Executing migration" id="dashboard permissions"
23:16:42 kafka | [2024-02-29 23:14:44,851] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.421652996Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=927.927µs
23:16:42 kafka | [2024-02-29 23:14:44,852] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.426235124Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
23:16:42 kafka | [2024-02-29 23:14:44,865] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.426854239Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=619.175µs
23:16:42 kafka | [2024-02-29 23:14:44,866] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.430641649Z level=info msg="Executing migration" id="drop managed folder create actions"
23:16:42 kafka | [2024-02-29 23:14:44,866] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.430839391Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=197.822µs
23:16:42 kafka | [2024-02-29 23:14:44,866] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12
23:16:42 grafana | 
logger=migrator t=2024-02-29T23:14:06.485142311Z level=info msg="Executing migration" id="alerting notification permissions" 23:16:42 kafka | [2024-02-29 23:14:44,866] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:42 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.485857227Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=715.676µs 23:16:42 kafka | [2024-02-29 23:14:44,875] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.492839543Z level=info msg="Executing migration" id="create query_history_star table v1" 23:16:42 kafka | [2024-02-29 23:14:44,875] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.493797661Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=960.268µs 23:16:42 kafka | [2024-02-29 23:14:44,875] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:42 policy-db-migrator | 41 
0500-pdpsubgroup.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.497916865Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:16:42 kafka | [2024-02-29 23:14:44,875] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.499693529Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.776174ms 23:16:42 kafka | [2024-02-29 23:14:44,875] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:42 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.505097403Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:16:42 kafka | [2024-02-29 23:14:44,886] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.513795243Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.68053ms 23:16:42 kafka | [2024-02-29 23:14:44,887] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.518904075Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:42 kafka | [2024-02-29 23:14:44,887] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:42 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.518979705Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=75.39µs 23:16:42 kafka | [2024-02-29 23:14:44,887] INFO [Partition __consumer_offsets-34 broker=1] Log 
loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.5245442Z level=info msg="Executing migration" id="create correlation table v1" 23:16:42 kafka | [2024-02-29 23:14:44,887] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:42 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.526076233Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.537993ms 23:16:42 kafka | [2024-02-29 23:14:44,898] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.531335846Z level=info msg="Executing migration" id="add index correlations.uid" 23:16:42 kafka | [2024-02-29 23:14:44,899] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.533564754Z level=info msg="Migration successfully executed" id="add index correlations.uid" 
duration=2.228478ms 23:16:42 kafka | [2024-02-29 23:14:44,899] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:42 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.538959938Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:16:42 kafka | [2024-02-29 23:14:44,899] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.540140938Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.17795ms 23:16:42 kafka | [2024-02-29 23:14:44,899] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:42 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.545622213Z level=info msg="Executing migration" id="add correlation config column" 23:16:42 kafka | [2024-02-29 23:14:44,907] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.553883401Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.260718ms 23:16:42 kafka | [2024-02-29 23:14:44,908] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.558241617Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:16:42 kafka | [2024-02-29 23:14:44,908] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:42 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.559343346Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.101689ms 23:16:42 kafka | [2024-02-29 23:14:44,908] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 
23:16:42 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.563903323Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
23:16:42 kafka | [2024-02-29 23:14:44,908] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.565064442Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.161229ms
23:16:42 kafka | [2024-02-29 23:14:44,914] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.571308873Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
23:16:42 kafka | [2024-02-29 23:14:44,914] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.603667995Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=32.354562ms
23:16:42 kafka | [2024-02-29 23:14:44,914] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.607439496Z level=info msg="Executing migration" id="create correlation v2"
23:16:42 kafka | [2024-02-29 23:14:44,914] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.608392064Z level=info msg="Migration successfully executed" id="create correlation v2" duration=952.337µs
23:16:42 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.613313383Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
23:16:42 kafka | [2024-02-29 23:14:44,914] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.614512843Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.19932ms
23:16:42 kafka | [2024-02-29 23:14:44,920] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.618602376Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
23:16:42 kafka | [2024-02-29 23:14:44,921] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.619472373Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=869.947µs
23:16:42 kafka | [2024-02-29 23:14:44,921] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.624218072Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
23:16:42 kafka | [2024-02-29 23:14:44,921] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.625088959Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=870.087µs
23:16:42 kafka | [2024-02-29 23:14:44,921] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.631536991Z level=info msg="Executing migration" id="copy correlation v1 to v2"
23:16:42 kafka | [2024-02-29 23:14:44,931] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.631933824Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=396.863µs
23:16:42 kafka | [2024-02-29 23:14:44,932] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.637986573Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
23:16:42 kafka | [2024-02-29 23:14:44,932] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.638727739Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=740.406µs
23:16:42 kafka | [2024-02-29 23:14:44,932] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.642053926Z level=info msg="Executing migration" id="add provisioning column"
23:16:42 kafka | [2024-02-29 23:14:44,932] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.648114075Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.055669ms
23:16:42 kafka | [2024-02-29 23:14:44,941] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.65362601Z level=info msg="Executing migration" id="create entity_events table"
23:16:42 kafka | [2024-02-29 23:14:44,941] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.654455157Z level=info msg="Migration successfully executed" id="create entity_events table" duration=828.777µs
23:16:42 kafka | [2024-02-29 23:14:44,942] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.657699413Z level=info msg="Executing migration" id="create dashboard public config v1"
23:16:42 kafka | [2024-02-29 23:14:44,942] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.658642651Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=942.928µs
23:16:42 kafka | [2024-02-29 23:14:44,942] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 kafka | [2024-02-29 23:14:44,952] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.666579105Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
23:16:42 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.667108139Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
23:16:42 kafka | [2024-02-29 23:14:44,953] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.670544647Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:42 kafka | [2024-02-29 23:14:44,953] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.671006261Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:42 kafka | [2024-02-29 23:14:44,953] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.674387908Z level=info msg="Executing migration" id="Drop old dashboard public config table"
23:16:42 kafka | [2024-02-29 23:14:44,953] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.675234615Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=846.427µs
23:16:42 kafka | [2024-02-29 23:14:44,968] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.685022895Z level=info msg="Executing migration" id="recreate dashboard public config v1"
23:16:42 kafka | [2024-02-29 23:14:44,969] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.687092991Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=2.073926ms
23:16:42 kafka | [2024-02-29 23:14:44,969] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
23:16:42 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.697473586Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
23:16:42 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 kafka | [2024-02-29 23:14:44,969] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.699362441Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.888735ms
23:16:42 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.703506245Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
23:16:42 kafka | [2024-02-29 23:14:44,970] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
23:16:42 kafka | [2024-02-29 23:14:44,977] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.70539654Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.894436ms
23:16:42 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
23:16:42 kafka | [2024-02-29 23:14:44,978] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.709268371Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
23:16:42 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
23:16:42 kafka | [2024-02-29 23:14:44,978] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.71031549Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.046899ms
23:16:42 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
23:16:42 kafka | [2024-02-29 23:14:44,978] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.714649185Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
23:16:42 kafka | [2024-02-29 23:14:44,978] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.716238508Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.588813ms
23:16:42 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
23:16:42 kafka | [2024-02-29 23:14:44,985] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.722497628Z level=info msg="Executing migration" id="Drop public config table"
23:16:42 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
23:16:42 kafka | [2024-02-29 23:14:44,986] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.724495695Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.998557ms
23:16:42 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
23:16:42 kafka | [2024-02-29 23:14:44,986] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.731692423Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
23:16:42 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
23:16:42 kafka | [2024-02-29 23:14:44,988] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.733508928Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.819555ms
23:16:42 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.738559719Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
23:16:42 kafka | [2024-02-29 23:14:44,988] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 23:16:42 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.740318113Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.759894ms 23:16:42 kafka | [2024-02-29 23:14:45,071] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.743762011Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:42 kafka | [2024-02-29 23:14:45,072] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.744577597Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=815.236µs 23:16:42 kafka | [2024-02-29 23:14:45,072] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:16:42 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.75101314Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:42 kafka | [2024-02-29 23:14:45,073] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for 
partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.751835376Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=822.006µs 23:16:42 kafka | [2024-02-29 23:14:45,073] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:42 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.756252832Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:42 kafka | [2024-02-29 23:14:45,082] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.786132634Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=29.879212ms 23:16:42 kafka | [2024-02-29 23:14:45,082] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15 23:16:42 grafana | 
logger=migrator t=2024-02-29T23:14:06.792422245Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:42 kafka | [2024-02-29 23:14:45,082] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:16:42 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.799553953Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.131598ms 23:16:42 kafka | [2024-02-29 23:14:45,082] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:16 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.806046676Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:42 kafka | [2024-02-29 23:14:45,082] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:42 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:16 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.815079139Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.035543ms 23:16:42 kafka | [2024-02-29 23:14:45,089] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:16 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.818787899Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:16:42 kafka | [2024-02-29 23:14:45,090] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.818982791Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=195.442µs 23:16:42 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,090] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.829224434Z level=info msg="Executing migration" id="add share column" 23:16:42 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,090] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 grafana | 
logger=migrator t=2024-02-29T23:14:06.83740259Z level=info msg="Migration successfully executed" id="add share column" duration=8.176106ms 23:16:42 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,090] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:42 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,098] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.841451153Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:16:42 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,099] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.841692575Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=239.592µs 23:16:42 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,099] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:16:42 
grafana | logger=migrator t=2024-02-29T23:14:06.845168203Z level=info msg="Executing migration" id="create file table" 23:16:42 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,099] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.845811838Z level=info msg="Migration successfully executed" id="create file table" duration=643.095µs 23:16:42 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,099] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.849177605Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:16:42 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,107] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.850291114Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.112519ms 23:16:42 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2902242314101100u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,107] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.854784511Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:16:42 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2902242314101200u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,107] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.85587486Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.089539ms 23:16:42 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2902242314101200u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,107] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with 
initial high watermark 0 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.861446275Z level=info msg="Executing migration" id="create file_meta table" 23:16:42 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2902242314101200u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,107] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.862743255Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.3025ms 23:16:42 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2902242314101200u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,115] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.86825456Z level=info msg="Executing migration" id="file table idx: path key" 23:16:42 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2902242314101300u 1 2024-02-29 23:14:16 23:16:42 kafka | [2024-02-29 23:14:45,115] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.869685972Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.430672ms 23:16:42 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2902242314101300u 1 2024-02-29 23:14:17 23:16:42 kafka | [2024-02-29 23:14:45,116] INFO 
[Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.876485627Z level=info msg="Executing migration" id="set path collation in file table" 23:16:42 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2902242314101300u 1 2024-02-29 23:14:17 23:16:42 kafka | [2024-02-29 23:14:45,116] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.876533527Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=48µs 23:16:42 policy-db-migrator | policyadmin: OK @ 1300 23:16:42 kafka | [2024-02-29 23:14:45,116] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.92626292Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:16:42 kafka | [2024-02-29 23:14:45,125] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.926374811Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=121.691µs 23:16:42 kafka | [2024-02-29 23:14:45,126] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.931037189Z level=info msg="Executing migration" id="managed permissions migration" 23:16:42 kafka | [2024-02-29 23:14:45,126] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.931988967Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=952.118µs 23:16:42 kafka | [2024-02-29 23:14:45,126] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.940479255Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.940740058Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=260.943µs 23:16:42 kafka | [2024-02-29 23:14:45,126] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id 
Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.953134628Z level=info msg="Executing migration" id="RBAC action name migrator" 23:16:42 kafka | [2024-02-29 23:14:45,137] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.954357308Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.22259ms 23:16:42 kafka | [2024-02-29 23:14:45,137] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.959938543Z level=info msg="Executing migration" id="Add UID column to playlist" 23:16:42 kafka | [2024-02-29 23:14:45,137] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.969575611Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.637708ms 23:16:42 kafka | [2024-02-29 23:14:45,137] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.973224521Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:16:42 kafka | [2024-02-29 23:14:45,137] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at 
leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.973383772Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=158.181µs 23:16:42 kafka | [2024-02-29 23:14:45,150] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.97678679Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:16:42 kafka | [2024-02-29 23:14:45,151] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.977793768Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.006558ms 23:16:42 kafka | [2024-02-29 23:14:45,151] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.983190692Z level=info msg="Executing migration" id="update group index for alert rules" 23:16:42 kafka | [2024-02-29 23:14:45,151] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.983608735Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=422.073µs 23:16:42 kafka | [2024-02-29 23:14:45,151] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts 
at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.987614558Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:16:42 kafka | [2024-02-29 23:14:45,157] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.987838899Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=221.311µs 23:16:42 kafka | [2024-02-29 23:14:45,159] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.993998279Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:16:42 kafka | [2024-02-29 23:14:45,159] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.994764375Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=771.216µs 23:16:42 kafka | [2024-02-29 23:14:45,159] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:06.998335394Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:16:42 kafka | [2024-02-29 23:14:45,159] INFO [Broker id=1] Leader __consumer_offsets-22 
with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.005540543Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=7.203759ms 23:16:42 kafka | [2024-02-29 23:14:45,167] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.009462236Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:16:42 kafka | [2024-02-29 23:14:45,167] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.017444353Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.986727ms 23:16:42 kafka | [2024-02-29 23:14:45,168] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.021974051Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:16:42 kafka | [2024-02-29 23:14:45,169] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.022942099Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=969.438µs 
23:16:42 kafka | [2024-02-29 23:14:45,169] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.026753001Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
23:16:42 kafka | [2024-02-29 23:14:45,182] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.134335783Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=107.584562ms
23:16:42 kafka | [2024-02-29 23:14:45,183] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.138148715Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
23:16:42 kafka | [2024-02-29 23:14:45,183] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.139054693Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=906.658µs
23:16:42 kafka | [2024-02-29 23:14:45,183] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.143783202Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
23:16:42 kafka | [2024-02-29 23:14:45,183] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.14463386Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=849.318µs
23:16:42 kafka | [2024-02-29 23:14:45,196] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.148092069Z level=info msg="Executing migration" id="add primary key to seed_assigment"
23:16:42 kafka | [2024-02-29 23:14:45,198] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.184491914Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=36.399255ms
23:16:42 kafka | [2024-02-29 23:14:45,198] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.189789958Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
23:16:42 kafka | [2024-02-29 23:14:45,198] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.19000521Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=216.192µs
23:16:42 kafka | [2024-02-29 23:14:45,198] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.19353829Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
23:16:42 kafka | [2024-02-29 23:14:45,207] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,207] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.193719892Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=181.802µs
23:16:42 kafka | [2024-02-29 23:14:45,207] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.197563534Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
23:16:42 kafka | [2024-02-29 23:14:45,208] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.197796426Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=230.752µs
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.201800359Z level=info msg="Executing migration" id="create folder table"
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.202639236Z level=info msg="Migration successfully executed" id="create folder table" duration=838.627µs
23:16:42 kafka | [2024-02-29 23:14:45,208] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.207916161Z level=info msg="Executing migration" id="Add index for parent_uid"
23:16:42 kafka | [2024-02-29 23:14:45,215] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.209868387Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.946456ms
23:16:42 kafka | [2024-02-29 23:14:45,216] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.214075043Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
23:16:42 kafka | [2024-02-29 23:14:45,216] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.215251222Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.1757ms
23:16:42 kafka | [2024-02-29 23:14:45,216] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.221476814Z level=info msg="Executing migration" id="Update folder title length"
23:16:42 kafka | [2024-02-29 23:14:45,216] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.221525595Z level=info msg="Migration successfully executed" id="Update folder title length" duration=49.921µs
23:16:42 kafka | [2024-02-29 23:14:45,224] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.2257169Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
23:16:42 kafka | [2024-02-29 23:14:45,225] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.227609226Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.891476ms
23:16:42 kafka | [2024-02-29 23:14:45,225] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.231554129Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
23:16:42 kafka | [2024-02-29 23:14:45,225] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.232623318Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.070889ms
23:16:42 kafka | [2024-02-29 23:14:45,226] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.237822332Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
23:16:42 kafka | [2024-02-29 23:14:45,234] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.240459224Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.627312ms
23:16:42 kafka | [2024-02-29 23:14:45,234] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.24479022Z level=info msg="Executing migration" id="Sync dashboard and folder table"
23:16:42 kafka | [2024-02-29 23:14:45,234] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.245492946Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=711.096µs
23:16:42 kafka | [2024-02-29 23:14:45,235] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.249194547Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
23:16:42 kafka | [2024-02-29 23:14:45,235] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(j4DaYO3UQ1iVwjuKp7Abhw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.249452909Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=258.382µs
23:16:42 kafka | [2024-02-29 23:14:45,242] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.25312241Z level=info msg="Executing migration" id="create anon_device table"
23:16:42 kafka | [2024-02-29 23:14:45,242] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.254085478Z level=info msg="Migration successfully executed" id="create anon_device table" duration=961.718µs
23:16:42 kafka | [2024-02-29 23:14:45,242] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.259358942Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
23:16:42 kafka | [2024-02-29 23:14:45,242] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.261245208Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.893036ms
23:16:42 kafka | [2024-02-29 23:14:45,243] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.267160828Z level=info msg="Executing migration" id="add index anon_device.updated_at"
23:16:42 kafka | [2024-02-29 23:14:45,249] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.268542679Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.374671ms
23:16:42 kafka | [2024-02-29 23:14:45,249] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.285813124Z level=info msg="Executing migration" id="create signing_key table"
23:16:42 kafka | [2024-02-29 23:14:45,249] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.287030055Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.224601ms
23:16:42 kafka | [2024-02-29 23:14:45,250] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.290584734Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
23:16:42 kafka | [2024-02-29 23:14:45,250] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.291858025Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.273181ms
23:16:42 kafka | [2024-02-29 23:14:45,257] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.330284558Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
23:16:42 kafka | [2024-02-29 23:14:45,262] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.332181004Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.896816ms
23:16:42 kafka | [2024-02-29 23:14:45,262] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.34012561Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
23:16:42 kafka | [2024-02-29 23:14:45,262] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.340512243Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=386.733µs
23:16:42 kafka | [2024-02-29 23:14:45,262] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.344280165Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
23:16:42 kafka | [2024-02-29 23:14:45,271] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.359858716Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=15.579341ms
23:16:42 kafka | [2024-02-29 23:14:45,272] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.363194964Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
23:16:42 kafka | [2024-02-29 23:14:45,272] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.363808059Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=616.245µs
23:16:42 kafka | [2024-02-29 23:14:45,273] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.368092615Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.369030943Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=938.218µs
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.375102504Z level=info msg="Executing migration" id="create sso_setting table"
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.376829138Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.725434ms
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.382715688Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.383593785Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=878.848µs
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.387963532Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.388255734Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=292.452µs
23:16:42 grafana | logger=migrator t=2024-02-29T23:14:07.394378666Z level=info msg="migrations completed" performed=526 skipped=0 duration=4.767963174s
23:16:42 grafana | logger=sqlstore t=2024-02-29T23:14:07.407765778Z level=info msg="Created default admin" user=admin
23:16:42 grafana | logger=sqlstore t=2024-02-29T23:14:07.408286182Z level=info msg="Created default organization"
23:16:42 grafana | logger=secrets t=2024-02-29T23:14:07.41272415Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
23:16:42 grafana | logger=plugin.store t=2024-02-29T23:14:07.428928345Z level=info msg="Loading plugins..."
23:16:42 grafana | logger=local.finder t=2024-02-29T23:14:07.467088936Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
23:16:42 grafana | logger=plugin.store t=2024-02-29T23:14:07.467166116Z level=info msg="Plugins loaded" count=55 duration=38.239841ms
23:16:42 grafana | logger=query_data t=2024-02-29T23:14:07.469914939Z level=info msg="Query Service initialization"
23:16:42 grafana | logger=live.push_http t=2024-02-29T23:14:07.474202946Z level=info msg="Live Push Gateway initialization"
23:16:42 grafana | logger=ngalert.migration t=2024-02-29T23:14:07.481160814Z level=info msg=Starting
23:16:42 grafana | logger=ngalert.migration orgID=1 t=2024-02-29T23:14:07.482100022Z level=info msg="Migrating alerts for organisation"
23:16:42 grafana | logger=ngalert.migration orgID=1 t=2024-02-29T23:14:07.482873438Z level=info msg="Alerts found to migrate" alerts=0
23:16:42 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-29T23:14:07.484809054Z level=info msg="Completed legacy migration"
23:16:42 grafana | logger=infra.usagestats.collector t=2024-02-29T23:14:07.515487082Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
23:16:42 grafana | logger=provisioning.datasources t=2024-02-29T23:14:07.517522319Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
23:16:42 grafana | logger=provisioning.alerting t=2024-02-29T23:14:07.53072671Z level=info msg="starting to provision alerting"
23:16:42 grafana | logger=provisioning.alerting t=2024-02-29T23:14:07.53074675Z level=info msg="finished to provision alerting"
23:16:42 grafana | logger=grafanaStorageLogger t=2024-02-29T23:14:07.531205264Z level=info msg="Storage starting"
23:16:42 grafana | logger=ngalert.state.manager t=2024-02-29T23:14:07.533208321Z level=info msg="Warming state cache for startup"
23:16:42 grafana | logger=ngalert.multiorg.alertmanager t=2024-02-29T23:14:07.534524342Z level=info msg="Starting MultiOrg Alertmanager"
23:16:42 grafana | logger=http.server t=2024-02-29T23:14:07.545510564Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
23:16:42 grafana | logger=grafana-apiserver t=2024-02-29T23:14:07.547684342Z level=info msg="Authentication is disabled"
23:16:42 grafana | logger=grafana-apiserver t=2024-02-29T23:14:07.556521116Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
23:16:42 grafana | logger=plugins.update.checker t=2024-02-29T23:14:07.646214989Z level=info msg="Update check succeeded" duration=115.008265ms
23:16:42 grafana | logger=ngalert.state.manager t=2024-02-29T23:14:07.664322081Z level=info msg="State cache has been initialized" states=0 duration=131.10945ms
23:16:42 grafana | logger=ngalert.scheduler t=2024-02-29T23:14:07.664434882Z level=info msg="Starting scheduler" tickInterval=10s
23:16:42 grafana | logger=ticker t=2024-02-29T23:14:07.664806385Z level=info msg=starting first_tick=2024-02-29T23:14:10Z
23:16:42 grafana | logger=grafana.update.checker t=2024-02-29T23:14:07.740094687Z level=info msg="Update check succeeded" duration=208.605311ms
23:16:42 grafana | logger=sqlstore.transactions t=2024-02-29T23:14:07.767387336Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
23:16:42 grafana | logger=sqlstore.transactions t=2024-02-29T23:14:07.778271117Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
23:16:42 grafana | logger=infra.usagestats t=2024-02-29T23:15:58.543605948Z level=info msg="Usage stats are ready to report"
23:16:42 kafka | [2024-02-29 23:14:45,273] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,285] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,288] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 kafka | [2024-02-29 23:14:45,289] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,289] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,289] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,302] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,303] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 kafka | [2024-02-29 23:14:45,303] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,303] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,304] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,313] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,314] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 kafka | [2024-02-29 23:14:45,314] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,314] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,314] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,324] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,324] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 kafka | [2024-02-29 23:14:45,325] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,325] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,325] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,332] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,333] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 kafka | [2024-02-29 23:14:45,333] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,333] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,333] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,340] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,340] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 kafka | [2024-02-29 23:14:45,340] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,340] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,340] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,349] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,350] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 kafka | [2024-02-29 23:14:45,350] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,350] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,351] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,358] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,358] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 kafka | [2024-02-29 23:14:45,358] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,358] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,359] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,366] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,366] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 kafka | [2024-02-29 23:14:45,366] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,366] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,366] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,374] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,379] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 kafka | [2024-02-29 23:14:45,379] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,380] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,380] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,390] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:42 kafka | [2024-02-29 23:14:45,391] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:42 kafka | [2024-02-29 23:14:45,393] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,393] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
23:16:42 kafka | [2024-02-29 23:14:45,393] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,403] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:42 kafka | [2024-02-29 23:14:45,404] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:42 kafka | [2024-02-29 23:14:45,404] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:42 kafka | [2024-02-29 23:14:45,404] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:16:42 kafka | [2024-02-29 23:14:45,404] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-4 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from 
controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:42 kafka | [2024-02-29 
23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,414] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 
(state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,414] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,414] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,414] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,414] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,415] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,415] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,415] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,422] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,427] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
23:16:42 kafka | [2024-02-29 23:14:45,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,430] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,430] 
INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,430] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,430] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,430] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,431] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,431] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,431] INFO [GroupCoordinator 1]: Elected as the 
group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,431] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,431] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 
23:16:42 kafka | [2024-02-29 23:14:45,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,434] 
INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,436] INFO [GroupMetadataManager brokerId=1] 
Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
23:16:42 kafka | [2024-02-29 23:14:45,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:42 kafka | [2024-02-29 23:14:45,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,440] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 12 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:42 kafka | [2024-02-29 23:14:45,443] INFO [Broker id=1] Finished LeaderAndIsr request in 744ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,468] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Fk26_aqxRF-nlCfGN2xAXQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=j4DaYO3UQ1iVwjuKp7Abhw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 
epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response 
to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for 
partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,477] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,478] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:42 kafka | [2024-02-29 23:14:45,593] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state.
Created a new member id consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:42 kafka | [2024-02-29 23:14:45,593] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group ee5900cb-eee5-431a-a953-12f2e7174bf4 in Empty state. Created a new member id consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:42 kafka | [2024-02-29 23:14:45,613] INFO [GroupCoordinator 1]: Preparing to rebalance group ee5900cb-eee5-431a-a953-12f2e7174bf4 in state PreparingRebalance with old generation 0 (__consumer_offsets-17) (reason: Adding new member consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:42 kafka | [2024-02-29 23:14:45,619] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:42 kafka | [2024-02-29 23:14:45,752] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f in Empty state. Created a new member id consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
23:16:42 kafka | [2024-02-29 23:14:45,756] INFO [GroupCoordinator 1]: Preparing to rebalance group 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f in state PreparingRebalance with old generation 0 (__consumer_offsets-43) (reason: Adding new member consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
23:16:42 kafka | [2024-02-29 23:14:48,623] INFO [GroupCoordinator 1]: Stabilized group ee5900cb-eee5-431a-a953-12f2e7174bf4 generation 1 (__consumer_offsets-17) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:42 kafka | [2024-02-29 23:14:48,627] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:42 kafka | [2024-02-29 23:14:48,659] INFO [GroupCoordinator 1]: Assignment received from leader consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d for group ee5900cb-eee5-431a-a953-12f2e7174bf4 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:42 kafka | [2024-02-29 23:14:48,659] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:42 kafka | [2024-02-29 23:14:48,758] INFO [GroupCoordinator 1]: Stabilized group 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f generation 1 (__consumer_offsets-43) with 1 members (kafka.coordinator.group.GroupCoordinator)
23:16:42 kafka | [2024-02-29 23:14:48,776] INFO [GroupCoordinator 1]: Assignment received from leader consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025 for group 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
23:16:42 ++ echo 'Tearing down containers...'
23:16:42 Tearing down containers...
23:16:42 ++ docker-compose down -v --remove-orphans
23:16:43 Stopping policy-apex-pdp ...
23:16:43 Stopping policy-pap ...
23:16:43 Stopping policy-api ...
23:16:43 Stopping kafka ...
23:16:43 Stopping grafana ...
23:16:43 Stopping simulator ...
23:16:43 Stopping mariadb ...
23:16:43 Stopping compose_zookeeper_1 ...
23:16:43 Stopping prometheus ...
23:16:44 Stopping grafana ... done
23:16:44 Stopping prometheus ... done
23:16:53 Stopping policy-apex-pdp ... done
23:17:04 Stopping simulator ... done
23:17:04 Stopping policy-pap ... done
23:17:05 Stopping mariadb ... done
23:17:05 Stopping kafka ... done
23:17:06 Stopping compose_zookeeper_1 ... done
23:17:14 Stopping policy-api ... done
23:17:14 Removing policy-apex-pdp ...
23:17:14 Removing policy-pap ...
23:17:14 Removing policy-api ...
23:17:14 Removing kafka ...
23:17:14 Removing policy-db-migrator ...
23:17:14 Removing grafana ...
23:17:14 Removing simulator ...
23:17:14 Removing mariadb ...
23:17:14 Removing compose_zookeeper_1 ...
23:17:14 Removing prometheus ...
23:17:14 Removing grafana ... done
23:17:14 Removing kafka ... done
23:17:14 Removing policy-api ... done
23:17:14 Removing prometheus ... done
23:17:14 Removing simulator ... done
23:17:14 Removing policy-db-migrator ... done
23:17:14 Removing policy-apex-pdp ... done
23:17:14 Removing policy-pap ... done
23:17:14 Removing mariadb ... done
23:17:14 Removing compose_zookeeper_1 ... done
23:17:14 Removing network compose_default
23:17:14 ++ cd /w/workspace/policy-pap-master-project-csit-pap
23:17:14 + load_set
23:17:14 + _setopts=hxB
23:17:14 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:17:14 ++ tr : ' '
23:17:14 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:14 + set +o braceexpand
23:17:14 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:14 + set +o hashall
23:17:14 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:14 + set +o interactive-comments
23:17:14 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:14 + set +o xtrace
23:17:14 ++ echo hxB
23:17:14 ++ sed 's/./& /g'
23:17:14 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:17:14 + set +h
23:17:14 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:17:14 + set +x
23:17:14 + [[ -n /tmp/tmp.yQgLqrqYzc ]]
23:17:14 + rsync -av /tmp/tmp.yQgLqrqYzc/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:17:14 sending incremental file list
23:17:14 ./
23:17:14 log.html
23:17:14 output.xml
23:17:14 report.html
23:17:14 testplan.txt
23:17:14
23:17:14 sent 919,002 bytes received 95 bytes 1,838,194.00 bytes/sec
23:17:14 total size is 918,461 speedup is 1.00
23:17:14 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:17:15 + exit 0
23:17:15 $ ssh-agent -k
23:17:15 unset SSH_AUTH_SOCK;
23:17:15 unset SSH_AGENT_PID;
23:17:15 echo Agent pid 2079 killed;
23:17:15 [ssh-agent] Stopped.
23:17:15 Robot results publisher started...
23:17:15 INFO: Checking test criticality is deprecated and will be dropped in a future release!
23:17:15 -Parsing output xml:
23:17:15 Done!
23:17:15 WARNING! Could not find file: **/log.html
23:17:15 WARNING! Could not find file: **/report.html
23:17:15 -Copying log files to build dir:
23:17:15 Done!
23:17:15 -Assigning results to build:
23:17:15 Done!
23:17:15 -Checking thresholds:
23:17:15 Done!
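The `load_set` trace above restores the caller's shell options after the teardown sub-script finishes: every long option named in `$SHELLOPTS` is switched off with `set +o`, then each short-option letter saved in `_setopts` is switched off with `set +<letter>`. A minimal sketch of that idiom (an assumed reconstruction, not the actual ci-management script; `_setopts` is assumed to have been captured earlier, e.g. from `$-`):

```shell
#!/bin/bash
# Sketch of the load_set idiom seen in the trace above.
load_set() {
  _setopts=hxB                        # assumed: captured earlier from "$-"
  # Disable every long option currently set (e.g. set +o xtrace).
  for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
    set +o "$i" 2>/dev/null || true   # skip options this shell can't toggle
  done
  # Disable each saved short-option letter (set +h, set +x, set +B).
  for i in $(echo "$_setopts" | sed 's/./& /g'); do
    set "+$i" 2>/dev/null || true
  done
}
```

In the job this runs right after a sub-script that enabled `set -x`, which is why the console trace stops immediately after the final `set +x`.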
23:17:15 Done publishing Robot results.
23:17:15 [PostBuildScript] - [INFO] Executing post build scripts.
23:17:15 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13590451755714185785.sh
23:17:15 ---> sysstat.sh
23:17:16 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12809220214754524161.sh
23:17:16 ---> package-listing.sh
23:17:16 ++ facter osfamily
23:17:16 ++ tr '[:upper:]' '[:lower:]'
23:17:16 + OS_FAMILY=debian
23:17:16 + workspace=/w/workspace/policy-pap-master-project-csit-pap
23:17:16 + START_PACKAGES=/tmp/packages_start.txt
23:17:16 + END_PACKAGES=/tmp/packages_end.txt
23:17:16 + DIFF_PACKAGES=/tmp/packages_diff.txt
23:17:16 + PACKAGES=/tmp/packages_start.txt
23:17:16 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:17:16 + PACKAGES=/tmp/packages_end.txt
23:17:16 + case "${OS_FAMILY}" in
23:17:16 + dpkg -l
23:17:16 + grep '^ii'
23:17:16 + '[' -f /tmp/packages_start.txt ']'
23:17:16 + '[' -f /tmp/packages_end.txt ']'
23:17:16 + diff /tmp/packages_start.txt /tmp/packages_end.txt
23:17:16 + '[' /w/workspace/policy-pap-master-project-csit-pap ']'
23:17:16 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
23:17:16 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
23:17:16 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7366362530703612521.sh
23:17:16 ---> capture-instance-metadata.sh
23:17:16 Setup pyenv:
23:17:16 system
23:17:16 3.8.13
23:17:16 3.9.13
23:17:16 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:16 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ohSB from file:/tmp/.os_lf_venv
23:17:18 lf-activate-venv(): INFO: Installing: lftools
23:17:28 lf-activate-venv(): INFO: Adding /tmp/venv-ohSB/bin to PATH
23:17:28 INFO: Running in OpenStack, capturing instance metadata
23:17:29 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12966616653946425579.sh
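The `package-listing.sh` trace above snapshots the installed Debian packages at job start and job end, then records only the delta. A hedged sketch of that flow (file names follow the trace; `snapshot_packages` and `snapshot_diff` are illustrative helpers, not the real script):

```shell
#!/bin/sh
# Snapshot installed packages (Debian family), mirroring "dpkg -l | grep '^ii'".
snapshot_packages() {
  dpkg -l | grep '^ii' > "$1"
}

# Write the delta between a start and an end snapshot. diff exits non-zero
# when the files differ, which is the expected case, so don't let it abort.
snapshot_diff() {
  start=$1; end=$2; out=$3
  if [ -f "$start" ] && [ -f "$end" ]; then
    diff "$start" "$end" > "$out" || true
  fi
}
```

The job then copies all three files into `archives/` so the package delta is preserved alongside the build logs.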
23:17:29 provisioning config files...
23:17:29 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config7726622470440274495tmp
23:17:29 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
23:17:29 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
23:17:29 [EnvInject] - Injecting environment variables from a build step.
23:17:29 [EnvInject] - Injecting as environment variables the properties content
23:17:29 SERVER_ID=logs
23:17:29
23:17:29 [EnvInject] - Variables injected successfully.
23:17:29 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9676243998789785870.sh
23:17:29 ---> create-netrc.sh
23:17:29 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17631643642979732378.sh
23:17:29 ---> python-tools-install.sh
23:17:29 Setup pyenv:
23:17:29 system
23:17:29 3.8.13
23:17:29 3.9.13
23:17:29 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:29 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ohSB from file:/tmp/.os_lf_venv
23:17:30 lf-activate-venv(): INFO: Installing: lftools
23:17:38 lf-activate-venv(): INFO: Adding /tmp/venv-ohSB/bin to PATH
23:17:38 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17945751632905612697.sh
23:17:38 ---> sudo-logs.sh
23:17:38 Archiving 'sudo' log..
23:17:38 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins889751829826837534.sh
23:17:38 ---> job-cost.sh
23:17:38 Setup pyenv:
23:17:39 system
23:17:39 3.8.13
23:17:39 3.9.13
23:17:39 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:39 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ohSB from file:/tmp/.os_lf_venv
23:17:40 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:17:45 lf-activate-venv(): INFO: Adding /tmp/venv-ohSB/bin to PATH
23:17:45 INFO: No Stack...
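The repeated `lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ohSB from file:/tmp/.os_lf_venv` lines above show the post-build steps sharing one Python venv: the first step records the venv path in a marker file, and later steps re-activate it instead of rebuilding. A sketch of that caching pattern, assuming the marker-file mechanism only; `activate_cached_venv` is a hypothetical stand-in for the real `lf-activate-venv()` helper:

```shell
#!/bin/sh
# $1 = marker file that caches the venv path (/tmp/.os_lf_venv in the log).
activate_cached_venv() {
  if [ -f "$1" ]; then
    # Marker exists: reuse the venv built by an earlier build step.
    venv_dir=$(cat "$1")
  else
    # First use: build a fresh venv and record its path for later steps.
    venv_dir=$(mktemp -d /tmp/venv-XXXXXX)
    python3 -m venv "$venv_dir"
    printf '%s\n' "$venv_dir" > "$1"
  fi
  PATH="$venv_dir/bin:$PATH"          # tools like lftools then resolve from here
  export PATH
}
```

Reusing the cached venv is why the later `Installing: lftools` steps finish in seconds rather than the half-minute the first install took.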
23:17:46 INFO: Retrieving Pricing Info for: v3-standard-8
23:17:46 INFO: Archiving Costs
23:17:46 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins6959643072135076841.sh
23:17:46 ---> logs-deploy.sh
23:17:46 Setup pyenv:
23:17:46 system
23:17:46 3.8.13
23:17:46 3.9.13
23:17:46 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:47 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ohSB from file:/tmp/.os_lf_venv
23:17:48 lf-activate-venv(): INFO: Installing: lftools
23:17:56 lf-activate-venv(): INFO: Adding /tmp/venv-ohSB/bin to PATH
23:17:56 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1595
23:17:56 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:17:57 Archives upload complete.
23:17:58 INFO: archiving logs to Nexus
23:17:58 ---> uname -a:
23:17:58 Linux prd-ubuntu1804-docker-8c-8g-9933 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:17:58
23:17:58
23:17:58 ---> lscpu:
23:17:58 Architecture: x86_64
23:17:58 CPU op-mode(s): 32-bit, 64-bit
23:17:58 Byte Order: Little Endian
23:17:58 CPU(s): 8
23:17:58 On-line CPU(s) list: 0-7
23:17:58 Thread(s) per core: 1
23:17:58 Core(s) per socket: 1
23:17:58 Socket(s): 8
23:17:58 NUMA node(s): 1
23:17:58 Vendor ID: AuthenticAMD
23:17:58 CPU family: 23
23:17:58 Model: 49
23:17:58 Model name: AMD EPYC-Rome Processor
23:17:58 Stepping: 0
23:17:58 CPU MHz: 2800.000
23:17:58 BogoMIPS: 5600.00
23:17:58 Virtualization: AMD-V
23:17:58 Hypervisor vendor: KVM
23:17:58 Virtualization type: full
23:17:58 L1d cache: 32K
23:17:58 L1i cache: 32K
23:17:58 L2 cache: 512K
23:17:58 L3 cache: 16384K
23:17:58 NUMA node0 CPU(s): 0-7
23:17:58 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:17:58
23:17:58
23:17:58 ---> nproc:
23:17:58 8
23:17:58
23:17:58
23:17:58 ---> df -h:
23:17:58 Filesystem Size Used Avail Use% Mounted on
23:17:58 udev 16G 0 16G 0% /dev
23:17:58 tmpfs 3.2G 708K 3.2G 1% /run
23:17:58 /dev/vda1 155G 14G 142G 9% /
23:17:58 tmpfs 16G 0 16G 0% /dev/shm
23:17:58 tmpfs 5.0M 0 5.0M 0% /run/lock
23:17:58 tmpfs 16G 0 16G 0% /sys/fs/cgroup
23:17:58 /dev/vda15 105M 4.4M 100M 5% /boot/efi
23:17:58 tmpfs 3.2G 0 3.2G 0% /run/user/1001
23:17:58
23:17:58
23:17:58 ---> free -m:
23:17:58 total used free shared buff/cache available
23:17:58 Mem: 32167 832 25127 0 6207 30879
23:17:58 Swap: 1023 0 1023
23:17:58
23:17:58
23:17:59 ---> ip addr:
23:17:59 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:17:59 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:17:59 inet 127.0.0.1/8 scope host lo
23:17:59 valid_lft forever preferred_lft forever
23:17:59 inet6 ::1/128 scope host
23:17:59 valid_lft forever preferred_lft forever
23:17:59 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:17:59 link/ether fa:16:3e:f4:f7:b5 brd ff:ff:ff:ff:ff:ff
23:17:59 inet 10.30.107.235/23 brd 10.30.107.255 scope global dynamic ens3
23:17:59 valid_lft 85943sec preferred_lft 85943sec
23:17:59 inet6 fe80::f816:3eff:fef4:f7b5/64 scope link
23:17:59 valid_lft forever preferred_lft forever
23:17:59 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:17:59 link/ether 02:42:c9:b1:98:ab brd ff:ff:ff:ff:ff:ff
23:17:59 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:17:59 valid_lft forever preferred_lft forever
23:17:59
23:17:59
23:17:59 ---> sar -b -r -n DEV:
23:17:59 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-9933) 02/29/24 _x86_64_ (8 CPU)
23:17:59
23:17:59 23:10:25 LINUX RESTART (8 CPU)
23:17:59
23:17:59 23:11:01 tps rtps wtps bread/s bwrtn/s
23:17:59 23:12:01 115.81 36.14 79.67 1687.84 25902.43
23:17:59 23:13:01 127.73 23.15 104.58 2767.54 31635.26
23:17:59 23:14:01 260.04 2.68 257.36 417.40 149330.71
23:17:59 23:15:01 287.37 9.88 277.49 402.53 28659.91
23:17:59 23:16:01 18.71 0.02 18.70 0.13 19628.10
23:17:59 23:17:01 28.27 0.07 28.21 10.53 21226.26
23:17:59 Average: 139.65 11.99 127.66 880.99 46062.53
23:17:59
23:17:59 23:11:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:17:59 23:12:01 30101880 31693852 2837340 8.61 70172 1832432 1433108 4.22 880844 1668688 157908
23:17:59 23:13:01 28594204 31664576 4345016 13.19 104804 3229348 1596872 4.70 991024 2967364 1217884
23:17:59 23:14:01 25582476 31472664 7356744 22.33 141536 5874976 4511916 13.28 1205708 5603564 500
23:17:59 23:15:01 23450660 29492500 9488560 28.81 156340 5993400 8906320 26.20 3378168 5506476 1396
23:17:59 23:16:01 23463856 29506528 9475364 28.77 156612 5993692 8867760 26.09 3365796 5504052 236
23:17:59 23:17:01 23716248 29784688 9222972 28.00 156948 6021868 7323488 21.55 3110336 5518468 336
23:17:59 Average: 25818221 30602468 7120999 21.62 131069 4824286 5439911 16.01 2155313 4461435 229710
23:17:59
23:17:59 23:11:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:17:59 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 23:12:01 ens3 63.06 42.45 872.93 9.36 0.00 0.00 0.00 0.00
23:17:59 23:12:01 lo 1.67 1.67 0.18 0.18 0.00 0.00 0.00 0.00
23:17:59 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 23:13:01 ens3 222.18 147.64 6440.86 15.16 0.00 0.00 0.00 0.00
23:17:59 23:13:01 lo 6.93 6.93 0.65 0.65 0.00 0.00 0.00 0.00
23:17:59 23:13:01 br-f795af09d20d 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 23:14:01 veth8264fa4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 23:14:01 veth35e9170 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 23:14:01 veth2a66cac 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 23:15:01 veth8264fa4 5.08 6.47 0.81 0.91 0.00 0.00 0.00 0.00
23:17:59 23:15:01 veth35e9170 0.00 0.42 0.00 0.02 0.00 0.00 0.00 0.00
23:17:59 23:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 23:15:01 veth175875d 0.52 0.90 0.06 0.31 0.00 0.00 0.00 0.00
23:17:59 23:16:01 veth8264fa4 0.17 0.35 0.01 0.02 0.00 0.00 0.00 0.00
23:17:59 23:16:01 veth35e9170 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 23:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 23:16:01 veth175875d 0.23 0.15 0.02 0.01 0.00 0.00 0.00 0.00
23:17:59 23:17:01 veth8264fa4 0.17 0.50 0.01 0.04 0.00 0.00 0.00 0.00
23:17:59 23:17:01 veth35e9170 0.00 0.15 0.00 0.01 0.00 0.00 0.00 0.00
23:17:59 23:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 23:17:01 ens3 1611.48 897.32 33899.60 132.11 0.00 0.00 0.00 0.00
23:17:59 Average: veth8264fa4 0.90 1.22 0.14 0.16 0.00 0.00 0.00 0.00
23:17:59 Average: veth35e9170 0.00 0.10 0.00 0.01 0.00 0.00 0.00 0.00
23:17:59 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:17:59 Average: ens3 215.83 114.86 5523.35 13.08 0.00 0.00 0.00 0.00
23:17:59
23:17:59
23:17:59 ---> sar -P ALL:
23:17:59 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-9933) 02/29/24 _x86_64_ (8 CPU)
23:17:59
23:17:59 23:10:25 LINUX RESTART (8 CPU)
23:17:59
23:17:59 23:11:01 CPU %user %nice %system %iowait %steal %idle
23:17:59 23:12:01 all 10.16 0.00 0.82 2.31 0.04 86.67
23:17:59 23:12:01 0 11.08 0.00 0.93 0.43 0.02 87.54
23:17:59 23:12:01 1 0.97 0.00 0.50 13.46 0.03 85.03
23:17:59 23:12:01 2 1.08 0.00 0.35 0.03 0.02 98.52
23:17:59 23:12:01 3 2.54 0.00 0.57 1.45 0.02 95.43
23:17:59 23:12:01 4 26.46 0.00 1.49 0.87 0.05 71.14
23:17:59 23:12:01 5 19.75 0.00 1.32 0.65 0.03 78.25
23:17:59 23:12:01 6 16.09 0.00 0.98 0.68 0.05 82.19
23:17:59 23:12:01 7 3.37 0.00 0.40 0.97 0.07 95.19
23:17:59 23:13:01 all 11.30 0.00 2.08 2.56 0.04 84.03
23:17:59 23:13:01 0 5.79 0.00 1.94 0.22 0.03 92.02
23:17:59 23:13:01 1 4.42 0.00 1.56 8.65 0.02 85.36
23:17:59 23:13:01 2 14.64 0.00 1.74 1.21 0.03 82.38
23:17:59 23:13:01 3 4.25 0.00 1.65 3.42 0.03 90.64
23:17:59 23:13:01 4 19.28 0.00 2.77 1.53 0.03 76.40
23:17:59 23:13:01 5 10.46 0.00 2.14 0.42 0.03 86.94
23:17:59 23:13:01 6 24.64 0.00 2.67 0.87 0.05 71.77
23:17:59 23:13:01 7 6.88 0.00 2.16 4.16 0.07 86.73
23:17:59 23:14:01 all 10.70 0.00 5.18 8.25 0.06 75.81
23:17:59 23:14:01 0 9.80 0.00 5.73 0.71 0.07 83.69
23:17:59 23:14:01 1 12.90 0.00 5.70 18.01 0.09 63.30
23:17:59 23:14:01 2 13.39 0.00 4.09 0.27 0.05 82.19
23:17:59 23:14:01 3 10.49 0.00 4.79 11.05 0.07 73.61
23:17:59 23:14:01 4 8.74 0.00 5.69 10.99 0.07 74.52
23:17:59 23:14:01 5 11.84 0.00 5.43 14.02 0.07 68.65
23:17:59 23:14:01 6 9.21 0.00 5.22 3.75 0.05 81.77
23:17:59 23:14:01 7 9.23 0.00 4.77 7.41 0.05 78.54
23:17:59 23:15:01 all 29.86 0.00 4.01 2.24 0.09 63.80
23:17:59 23:15:01 0 26.73 0.00 3.91 1.70 0.08 67.58
23:17:59 23:15:01 1 29.37 0.00 4.13 1.55 0.10 64.85
23:17:59 23:15:01 2 30.43 0.00 4.13 6.48 0.08 58.87
23:17:59 23:15:01 3 29.34 0.00 3.76 0.69 0.08 66.14
23:17:59 23:15:01 4 30.79 0.00 4.09 2.04 0.10 62.98
23:17:59 23:15:01 5 40.86 0.00 4.93 0.59 0.10 53.52
23:17:59 23:15:01 6 29.23 0.00 4.15 1.92 0.07 64.64
23:17:59 23:15:01 7 22.20 0.00 3.05 2.92 0.08 71.74
23:17:59 23:16:01 all 4.67 0.00 0.44 0.95 0.06 93.89
23:17:59 23:16:01 0 3.99 0.00 0.48 0.00 0.03 95.50
23:17:59 23:16:01 1 5.06 0.00 0.40 0.03 0.05 94.45
23:17:59 23:16:01 2 5.05 0.00 0.40 7.44 0.05 87.06
23:17:59 23:16:01 3 3.85 0.00 0.44 0.03 0.05 95.63
23:17:59 23:16:01 4 4.26 0.00 0.45 0.00 0.07 95.23
23:17:59 23:16:01 5 6.06 0.00 0.65 0.00 0.07 93.22
23:17:59 23:16:01 6 5.95 0.00 0.45 0.05 0.07 93.48
23:17:59 23:16:01 7 3.08 0.00 0.22 0.02 0.07 96.62
23:17:59 23:17:01 all 1.31 0.00 0.37 1.21 0.05 97.06
23:17:59 23:17:01 0 1.19 0.00 0.37 0.00 0.05 98.40
23:17:59 23:17:01 1 1.22 0.00 0.40 0.17 0.05 98.16
23:17:59 23:17:01 2 1.97 0.00 0.45 8.98 0.07 88.53
23:17:59 23:17:01 3 0.89 0.00 0.37 0.05 0.05 98.64
23:17:59 23:17:01 4 1.52 0.00 0.30 0.37 0.05 97.76
23:17:59 23:17:01 5 1.00 0.00 0.47 0.02 0.08 98.43
23:17:59 23:17:01 6 0.87 0.00 0.22 0.02 0.02 98.88
23:17:59 23:17:01 7 1.85 0.00 0.38 0.08 0.07 97.61
23:17:59 Average: all 11.32 0.00 2.14 2.91 0.06 83.58
23:17:59 Average: 0 9.75 0.00 2.22 0.51 0.05 87.48
23:17:59 Average: 1 8.95 0.00 2.10 6.95 0.06 81.95
23:17:59 Average: 2 11.07 0.00 1.85 4.07 0.05 82.95
23:17:59 Average: 3 8.55 0.00 1.92 2.76 0.05 86.72
23:17:59 Average: 4 15.18 0.00 2.46 2.61 0.06 79.69
23:17:59 Average: 5 14.98 0.00 2.48 2.58 0.06 79.90
23:17:59 Average: 6 14.32 0.00 2.27 1.21 0.05 82.15
23:17:59 Average: 7 7.76 0.00 1.82 2.58 0.07 87.77
23:17:59
23:17:59
23:17:59