16:09:34 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137538 16:09:34 Running as SYSTEM 16:09:34 [EnvInject] - Loading node environment variables. 16:09:34 Building remotely on prd-ubuntu1804-docker-8c-8g-14655 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-verify-pap 16:09:34 [ssh-agent] Looking for ssh-agent implementation... 16:09:34 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 16:09:34 $ ssh-agent 16:09:34 SSH_AUTH_SOCK=/tmp/ssh-Y4zFaHkeyVID/agent.2099 16:09:34 SSH_AGENT_PID=2101 16:09:34 [ssh-agent] Started. 16:09:34 Running ssh-add (command line suppressed) 16:09:34 Identity added: /w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_3600161837914408748.key (/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_3600161837914408748.key) 16:09:34 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) 16:09:34 The recommended git tool is: NONE 16:09:36 using credential onap-jenkins-ssh 16:09:36 Wiping out workspace first. 16:09:36 Cloning the remote Git repository 16:09:36 Cloning repository git://cloud.onap.org/mirror/policy/docker.git 16:09:36 > git init /w/workspace/policy-pap-master-project-csit-verify-pap # timeout=10 16:09:36 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git 16:09:36 > git --version # timeout=10 16:09:36 > git --version # 'git version 2.17.1' 16:09:36 using GIT_SSH to set credentials Gerrit user 16:09:36 Verifying host key using manually-configured host key entries 16:09:36 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 16:09:36 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 16:09:36 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 16:09:37 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 16:09:37 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git 16:09:37 using GIT_SSH to set credentials Gerrit user 16:09:37 Verifying host key using manually-configured host key entries 16:09:37 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/38/137538/1 # timeout=30 16:09:37 > git rev-parse 6f8a8fdf2815ab5354b15c3fa6c076c09cf62b27^{commit} # timeout=10 16:09:37 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script 16:09:37 Checking out Revision 6f8a8fdf2815ab5354b15c3fa6c076c09cf62b27 (refs/changes/38/137538/1) 16:09:37 > git config core.sparsecheckout # timeout=10 16:09:37 > git checkout -f 6f8a8fdf2815ab5354b15c3fa6c076c09cf62b27 # timeout=30 16:09:40 Commit message: "Fix jenkins merge job failure in policy-docker" 16:09:40 > git rev-parse FETCH_HEAD^{commit} # timeout=10 16:09:40 > git rev-list --no-walk 3e1c1491c4aa260fda04d13cd7ad97056e43c02a # timeout=10 16:09:40 provisioning config files... 
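The checkout above pulls the change under review straight from its Gerrit ref rather than from a branch. A minimal sketch of reproducing the same checkout by hand, using the mirror URL, change ref, and commit shown in the log:

    git clone git://cloud.onap.org/mirror/policy/docker.git
    cd docker
    # refs/changes/<last two digits>/<change number>/<patchset>, as fetched above
    git fetch origin refs/changes/38/137538/1
    git checkout -f 6f8a8fdf2815ab5354b15c3fa6c076c09cf62b27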
16:09:40 copy managed file [npmrc] to file:/home/jenkins/.npmrc 16:09:40 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 16:09:40 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins16809607433560129566.sh 16:09:40 ---> python-tools-install.sh 16:09:40 Setup pyenv: 16:09:40 * system (set by /opt/pyenv/version) 16:09:40 * 3.8.13 (set by /opt/pyenv/version) 16:09:40 * 3.9.13 (set by /opt/pyenv/version) 16:09:40 * 3.10.6 (set by /opt/pyenv/version) 16:09:45 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-5zh5 16:09:45 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 16:09:48 lf-activate-venv(): INFO: Installing: lftools 16:10:22 lf-activate-venv(): INFO: Adding /tmp/venv-5zh5/bin to PATH 16:10:22 Generating Requirements File 16:10:49 Python 3.10.6 16:10:49 pip 24.0 from /tmp/venv-5zh5/lib/python3.10/site-packages/pip (python 3.10) 16:10:50 appdirs==1.4.4 16:10:50 argcomplete==3.2.3 16:10:50 aspy.yaml==1.3.0 16:10:50 attrs==23.2.0 16:10:50 autopage==0.5.2 16:10:50 beautifulsoup4==4.12.3 16:10:50 boto3==1.34.66 16:10:50 botocore==1.34.66 16:10:50 bs4==0.0.2 16:10:50 cachetools==5.3.3 16:10:50 certifi==2024.2.2 16:10:50 cffi==1.16.0 16:10:50 cfgv==3.4.0 16:10:50 chardet==5.2.0 16:10:50 charset-normalizer==3.3.2 16:10:50 click==8.1.7 16:10:50 cliff==4.6.0 16:10:50 cmd2==2.4.3 16:10:50 cryptography==3.3.2 16:10:50 debtcollector==3.0.0 16:10:50 decorator==5.1.1 16:10:50 defusedxml==0.7.1 16:10:50 Deprecated==1.2.14 16:10:50 distlib==0.3.8 16:10:50 dnspython==2.6.1 16:10:50 docker==4.2.2 16:10:50 dogpile.cache==1.3.2 16:10:50 email_validator==2.1.1 16:10:50 filelock==3.13.1 16:10:50 future==1.0.0 16:10:50 gitdb==4.0.11 16:10:50 GitPython==3.1.42 16:10:50 google-auth==2.28.2 16:10:50 httplib2==0.22.0 16:10:50 identify==2.5.35 16:10:50 idna==3.6 16:10:50 importlib-resources==1.5.0 16:10:50 iso8601==2.1.0 16:10:50 Jinja2==3.1.3 16:10:50 jmespath==1.0.1 16:10:50 jsonpatch==1.33 16:10:50 jsonpointer==2.4 16:10:50 jsonschema==4.21.1 16:10:50 jsonschema-specifications==2023.12.1 16:10:50 keystoneauth1==5.6.0 16:10:50 kubernetes==29.0.0 16:10:50 lftools==0.37.10 16:10:50 lxml==5.1.0 16:10:50 MarkupSafe==2.1.5 16:10:50 msgpack==1.0.8 16:10:50 multi_key_dict==2.0.3 16:10:50 munch==4.0.0 16:10:50 netaddr==1.2.1 16:10:50 netifaces==0.11.0 16:10:50 niet==1.4.2 16:10:50 nodeenv==1.8.0 16:10:50 oauth2client==4.1.3 16:10:50 oauthlib==3.2.2 16:10:50 openstacksdk==3.0.0 16:10:50 os-client-config==2.1.0 16:10:50 os-service-types==1.7.0 16:10:50 osc-lib==3.0.1 16:10:50 oslo.config==9.4.0 16:10:50 oslo.context==5.5.0 16:10:50 oslo.i18n==6.3.0 16:10:50 oslo.log==5.5.0 16:10:50 oslo.serialization==5.4.0 16:10:50 oslo.utils==7.1.0 16:10:50 packaging==24.0 16:10:50 pbr==6.0.0 16:10:50 platformdirs==4.2.0 16:10:50 prettytable==3.10.0 16:10:50 pyasn1==0.5.1 16:10:50 pyasn1-modules==0.3.0 16:10:50 pycparser==2.21 16:10:50 pygerrit2==2.0.15 16:10:50 PyGithub==2.2.0 16:10:50 pyinotify==0.9.6 16:10:50 PyJWT==2.8.0 16:10:50 PyNaCl==1.5.0 16:10:50 pyparsing==2.4.7 16:10:50 pyperclip==1.8.2 16:10:50 pyrsistent==0.20.0 16:10:50 python-cinderclient==9.5.0 16:10:50 python-dateutil==2.9.0.post0 16:10:50 python-heatclient==3.5.0 16:10:50 python-jenkins==1.8.2 16:10:50 python-keystoneclient==5.4.0 16:10:50 python-magnumclient==4.4.0 16:10:50 python-novaclient==18.6.0 16:10:50 python-openstackclient==6.6.0 16:10:50 python-swiftclient==4.5.0 16:10:50 PyYAML==6.0.1 16:10:50 referencing==0.34.0 16:10:50 requests==2.31.0 16:10:50 requests-oauthlib==1.4.0 16:10:50 
requestsexceptions==1.4.0 16:10:50 rfc3986==2.0.0 16:10:50 rpds-py==0.18.0 16:10:50 rsa==4.9 16:10:50 ruamel.yaml==0.18.6 16:10:50 ruamel.yaml.clib==0.2.8 16:10:50 s3transfer==0.10.1 16:10:50 simplejson==3.19.2 16:10:50 six==1.16.0 16:10:50 smmap==5.0.1 16:10:50 soupsieve==2.5 16:10:50 stevedore==5.2.0 16:10:50 tabulate==0.9.0 16:10:50 toml==0.10.2 16:10:50 tomlkit==0.12.4 16:10:50 tqdm==4.66.2 16:10:50 typing_extensions==4.10.0 16:10:50 tzdata==2024.1 16:10:50 urllib3==1.26.18 16:10:50 virtualenv==20.25.1 16:10:50 wcwidth==0.2.13 16:10:50 websocket-client==1.7.0 16:10:50 wrapt==1.16.0 16:10:50 xdg==6.0.0 16:10:50 xmltodict==0.13.0 16:10:50 yq==3.2.3 16:10:50 [EnvInject] - Injecting environment variables from a build step. 16:10:50 [EnvInject] - Injecting as environment variables the properties content 16:10:50 SET_JDK_VERSION=openjdk17 16:10:50 GIT_URL="git://cloud.onap.org/mirror" 16:10:50 16:10:50 [EnvInject] - Variables injected successfully. 16:10:50 [policy-pap-master-project-csit-verify-pap] $ /bin/sh /tmp/jenkins4326716690572065455.sh 16:10:50 ---> update-java-alternatives.sh 16:10:50 ---> Updating Java version 16:10:50 ---> Ubuntu/Debian system detected 16:10:50 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 16:10:50 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 16:10:50 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 16:10:50 openjdk version "17.0.4" 2022-07-19 16:10:50 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 16:10:50 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 16:10:50 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 16:10:50 [EnvInject] - Injecting environment variables from a build step. 16:10:50 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 16:10:50 [EnvInject] - Variables injected successfully. 
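update-java-alternatives.sh itself is not reproduced in the log; the update-alternatives output above suggests it reduces to roughly the following (paths as printed above; sudo and pre-registered alternatives assumed):

    sudo update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java
    sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
    export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64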
16:10:50 [policy-pap-master-project-csit-verify-pap] $ /bin/sh -xe /tmp/jenkins5761896524239907384.sh 16:10:50 + /w/workspace/policy-pap-master-project-csit-verify-pap/csit/run-project-csit.sh pap 16:10:50 + set +u 16:10:50 + save_set 16:10:50 + RUN_CSIT_SAVE_SET=ehxB 16:10:50 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 16:10:50 + '[' 1 -eq 0 ']' 16:10:50 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']' 16:10:50 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin 16:10:50 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin 16:10:50 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts 16:10:50 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts 16:10:50 + export ROBOT_VARIABLES= 16:10:50 + ROBOT_VARIABLES= 16:10:50 + export PROJECT=pap 16:10:50 + PROJECT=pap 16:10:50 + cd /w/workspace/policy-pap-master-project-csit-verify-pap 16:10:50 + rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap 16:10:50 + mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap 16:10:50 + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh 16:10:50 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh ']' 16:10:50 + relax_set 16:10:50 + set +e 16:10:50 + set +o pipefail 16:10:50 + . /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh 16:10:50 ++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']' 16:10:50 +++ mktemp -d 16:10:50 ++ ROBOT_VENV=/tmp/tmp.AcFh82RfeE 16:10:50 ++ echo ROBOT_VENV=/tmp/tmp.AcFh82RfeE 16:10:50 +++ python3 --version 16:10:50 ++ echo 'Python version is: Python 3.6.9' 16:10:50 Python version is: Python 3.6.9 16:10:50 ++ python3 -m venv --clear /tmp/tmp.AcFh82RfeE 16:10:52 ++ source /tmp/tmp.AcFh82RfeE/bin/activate 16:10:52 +++ deactivate nondestructive 16:10:52 +++ '[' -n '' ']' 16:10:52 +++ '[' -n '' ']' 16:10:52 +++ '[' -n /bin/bash -o -n '' ']' 16:10:52 +++ hash -r 16:10:52 +++ '[' -n '' ']' 16:10:52 +++ unset VIRTUAL_ENV 16:10:52 +++ '[' '!' 
nondestructive = nondestructive ']' 16:10:52 +++ VIRTUAL_ENV=/tmp/tmp.AcFh82RfeE 16:10:52 +++ export VIRTUAL_ENV 16:10:52 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin 16:10:52 +++ PATH=/tmp/tmp.AcFh82RfeE/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin 16:10:52 +++ export PATH 16:10:52 +++ '[' -n '' ']' 16:10:52 +++ '[' -z '' ']' 16:10:52 +++ _OLD_VIRTUAL_PS1= 16:10:52 +++ '[' 'x(tmp.AcFh82RfeE) ' '!=' x ']' 16:10:52 +++ PS1='(tmp.AcFh82RfeE) ' 16:10:52 +++ export PS1 16:10:52 +++ '[' -n /bin/bash -o -n '' ']' 16:10:52 +++ hash -r 16:10:52 ++ set -exu 16:10:52 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 16:10:55 ++ echo 'Installing Python Requirements' 16:10:55 Installing Python Requirements 16:10:55 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/pylibs.txt 16:11:14 ++ python3 -m pip -qq freeze 16:11:14 bcrypt==4.0.1 16:11:14 beautifulsoup4==4.12.3 16:11:14 bitarray==2.9.2 16:11:14 certifi==2024.2.2 16:11:14 cffi==1.15.1 16:11:14 charset-normalizer==2.0.12 16:11:14 cryptography==40.0.2 16:11:14 decorator==5.1.1 16:11:14 elasticsearch==7.17.9 16:11:14 elasticsearch-dsl==7.4.1 16:11:14 enum34==1.1.10 16:11:14 idna==3.6 16:11:14 importlib-resources==5.4.0 16:11:14 ipaddr==2.2.0 16:11:14 isodate==0.6.1 16:11:14 jmespath==0.10.0 16:11:14 jsonpatch==1.32 16:11:14 jsonpath-rw==1.4.0 16:11:14 jsonpointer==2.3 16:11:14 lxml==5.1.0 16:11:14 netaddr==0.8.0 16:11:14 netifaces==0.11.0 16:11:14 odltools==0.1.28 16:11:14 paramiko==3.4.0 16:11:14 pkg_resources==0.0.0 16:11:14 ply==3.11 16:11:14 pyang==2.6.0 16:11:14 pyangbind==0.8.1 16:11:14 pycparser==2.21 16:11:14 pyhocon==0.3.60 16:11:14 PyNaCl==1.5.0 16:11:14 pyparsing==3.1.2 16:11:14 python-dateutil==2.9.0.post0 16:11:14 regex==2023.8.8 16:11:14 requests==2.27.1 16:11:14 robotframework==6.1.1 16:11:14 robotframework-httplibrary==0.4.2 16:11:14 robotframework-pythonlibcore==3.0.0 16:11:14 robotframework-requests==0.9.4 16:11:14 robotframework-selenium2library==3.0.0 16:11:14 robotframework-seleniumlibrary==5.1.3 16:11:14 robotframework-sshlibrary==3.8.0 16:11:14 scapy==2.5.0 16:11:14 scp==0.14.5 16:11:14 selenium==3.141.0 16:11:14 six==1.16.0 16:11:14 soupsieve==2.3.2.post1 16:11:14 urllib3==1.26.18 16:11:14 waitress==2.0.0 16:11:14 WebOb==1.8.7 16:11:14 WebTest==3.0.0 16:11:14 zipp==3.6.0 16:11:14 ++ mkdir -p /tmp/tmp.AcFh82RfeE/src/onap 16:11:14 ++ rm -rf /tmp/tmp.AcFh82RfeE/src/onap/testsuite 16:11:14 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 16:11:20 ++ echo 'Installing python confluent-kafka library' 16:11:20 Installing python confluent-kafka library 16:11:20 ++ python3 -m pip install -qq confluent-kafka 16:11:21 ++ echo 'Uninstall docker-py and reinstall docker.' 16:11:21 Uninstall docker-py and reinstall docker. 
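The docker-py/docker swap that follows is needed because both PyPI packages install the same `docker` import name, so a leftover docker-py would shadow the maintained docker SDK. A quick sanity check after the reinstall (an illustrative one-liner, not part of the job):

    python3 -c 'import docker; print(docker.__version__)'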
16:11:21 ++ python3 -m pip uninstall -y -qq docker 16:11:21 ++ python3 -m pip install -U -qq docker 16:11:23 ++ python3 -m pip -qq freeze 16:11:23 bcrypt==4.0.1 16:11:23 beautifulsoup4==4.12.3 16:11:23 bitarray==2.9.2 16:11:23 certifi==2024.2.2 16:11:23 cffi==1.15.1 16:11:23 charset-normalizer==2.0.12 16:11:23 confluent-kafka==2.3.0 16:11:23 cryptography==40.0.2 16:11:23 decorator==5.1.1 16:11:23 deepdiff==5.7.0 16:11:23 dnspython==2.2.1 16:11:23 docker==5.0.3 16:11:23 elasticsearch==7.17.9 16:11:23 elasticsearch-dsl==7.4.1 16:11:23 enum34==1.1.10 16:11:23 future==1.0.0 16:11:23 idna==3.6 16:11:23 importlib-resources==5.4.0 16:11:23 ipaddr==2.2.0 16:11:23 isodate==0.6.1 16:11:23 Jinja2==3.0.3 16:11:23 jmespath==0.10.0 16:11:23 jsonpatch==1.32 16:11:23 jsonpath-rw==1.4.0 16:11:23 jsonpointer==2.3 16:11:23 kafka-python==2.0.2 16:11:23 lxml==5.1.0 16:11:23 MarkupSafe==2.0.1 16:11:23 more-itertools==5.0.0 16:11:23 netaddr==0.8.0 16:11:23 netifaces==0.11.0 16:11:23 odltools==0.1.28 16:11:23 ordered-set==4.0.2 16:11:23 paramiko==3.4.0 16:11:23 pbr==6.0.0 16:11:23 pkg_resources==0.0.0 16:11:23 ply==3.11 16:11:23 protobuf==3.19.6 16:11:23 pyang==2.6.0 16:11:23 pyangbind==0.8.1 16:11:23 pycparser==2.21 16:11:23 pyhocon==0.3.60 16:11:23 PyNaCl==1.5.0 16:11:23 pyparsing==3.1.2 16:11:23 python-dateutil==2.9.0.post0 16:11:23 PyYAML==6.0.1 16:11:23 regex==2023.8.8 16:11:23 requests==2.27.1 16:11:23 robotframework==6.1.1 16:11:23 robotframework-httplibrary==0.4.2 16:11:23 robotframework-onap==0.6.0.dev105 16:11:23 robotframework-pythonlibcore==3.0.0 16:11:23 robotframework-requests==0.9.4 16:11:23 robotframework-selenium2library==3.0.0 16:11:23 robotframework-seleniumlibrary==5.1.3 16:11:23 robotframework-sshlibrary==3.8.0 16:11:23 robotlibcore-temp==1.0.2 16:11:23 scapy==2.5.0 16:11:23 scp==0.14.5 16:11:23 selenium==3.141.0 16:11:23 six==1.16.0 16:11:23 soupsieve==2.3.2.post1 16:11:23 urllib3==1.26.18 16:11:23 waitress==2.0.0 16:11:23 WebOb==1.8.7 16:11:23 websocket-client==1.3.1 16:11:23 WebTest==3.0.0 16:11:23 zipp==3.6.0 16:11:23 ++ uname 16:11:23 ++ grep -q Linux 16:11:23 ++ sudo apt-get -y -qq install libxml2-utils 16:11:23 + load_set 16:11:23 + _setopts=ehuxB 16:11:23 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 16:11:23 ++ tr : ' ' 16:11:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:11:23 + set +o braceexpand 16:11:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:11:23 + set +o hashall 16:11:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:11:23 + set +o interactive-comments 16:11:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:11:23 + set +o nounset 16:11:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:11:23 + set +o xtrace 16:11:23 ++ echo ehuxB 16:11:23 ++ sed 's/./& /g' 16:11:23 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:11:23 + set +e 16:11:23 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:11:23 + set +h 16:11:23 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:11:23 + set +u 16:11:23 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:11:23 + set +x 16:11:23 + source_safely /tmp/tmp.AcFh82RfeE/bin/activate 16:11:23 + '[' -z /tmp/tmp.AcFh82RfeE/bin/activate ']' 16:11:23 + relax_set 16:11:23 + set +e 16:11:23 + set +o pipefail 16:11:23 + . 
/tmp/tmp.AcFh82RfeE/bin/activate 16:11:23 ++ deactivate nondestructive 16:11:23 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin ']' 16:11:23 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin 16:11:23 ++ export PATH 16:11:23 ++ unset _OLD_VIRTUAL_PATH 16:11:23 ++ '[' -n '' ']' 16:11:23 ++ '[' -n /bin/bash -o -n '' ']' 16:11:23 ++ hash -r 16:11:23 ++ '[' -n '' ']' 16:11:23 ++ unset VIRTUAL_ENV 16:11:23 ++ '[' '!' nondestructive = nondestructive ']' 16:11:23 ++ VIRTUAL_ENV=/tmp/tmp.AcFh82RfeE 16:11:23 ++ export VIRTUAL_ENV 16:11:23 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin 16:11:23 ++ PATH=/tmp/tmp.AcFh82RfeE/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin 16:11:23 ++ export PATH 16:11:23 ++ '[' -n '' ']' 16:11:23 ++ '[' -z '' ']' 16:11:23 ++ _OLD_VIRTUAL_PS1='(tmp.AcFh82RfeE) ' 16:11:23 ++ '[' 'x(tmp.AcFh82RfeE) ' '!=' x ']' 16:11:23 ++ PS1='(tmp.AcFh82RfeE) (tmp.AcFh82RfeE) ' 16:11:23 ++ export PS1 16:11:23 ++ '[' -n /bin/bash -o -n '' ']' 16:11:23 ++ hash -r 16:11:23 + load_set 16:11:23 + _setopts=hxB 16:11:23 ++ echo braceexpand:hashall:interactive-comments:xtrace 16:11:23 ++ tr : ' ' 16:11:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:11:23 + set +o braceexpand 16:11:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:11:23 + set +o hashall 16:11:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:11:23 + set +o interactive-comments 16:11:23 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:11:23 + set +o xtrace 16:11:23 ++ echo hxB 16:11:23 ++ sed 's/./& /g' 16:11:23 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:11:23 + set +h 16:11:23 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:11:23 + set +x 16:11:23 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests 16:11:23 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests 16:11:23 + export TEST_OPTIONS= 16:11:23 + TEST_OPTIONS= 16:11:23 ++ mktemp -d 16:11:23 + WORKDIR=/tmp/tmp.STfXWGE5EF 16:11:23 + cd /tmp/tmp.STfXWGE5EF 16:11:23 + docker login -u docker -p docker nexus3.onap.org:10001 16:11:24 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 16:11:24 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 16:11:24 Configure a credential helper to remove this warning. 
See 16:11:24 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 16:11:24 16:11:24 Login Succeeded 16:11:24 + SETUP=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh 16:11:24 + '[' -f /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh ']' 16:11:24 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh' 16:11:24 Running setup script /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh 16:11:24 + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh 16:11:24 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh ']' 16:11:24 + relax_set 16:11:24 + set +e 16:11:24 + set +o pipefail 16:11:24 + . /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh 16:11:24 ++ source /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/node-templates.sh 16:11:24 +++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']' 16:11:24 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-verify-pap/.gitreview 16:11:24 +++ GERRIT_BRANCH=master 16:11:24 +++ echo GERRIT_BRANCH=master 16:11:24 GERRIT_BRANCH=master 16:11:24 +++ rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/models 16:11:24 +++ mkdir /w/workspace/policy-pap-master-project-csit-verify-pap/models 16:11:24 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-verify-pap/models 16:11:24 Cloning into '/w/workspace/policy-pap-master-project-csit-verify-pap/models'... 16:11:25 +++ export DATA=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies 16:11:25 +++ DATA=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies 16:11:25 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates 16:11:25 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates 16:11:25 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 16:11:25 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' 
/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 16:11:25 ++ source /w/workspace/policy-pap-master-project-csit-verify-pap/compose/start-compose.sh apex-pdp --grafana 16:11:25 +++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']' 16:11:25 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-verify-pap/compose 16:11:25 +++ grafana=false 16:11:25 +++ gui=false 16:11:25 +++ [[ 2 -gt 0 ]] 16:11:25 +++ key=apex-pdp 16:11:25 +++ case $key in 16:11:25 +++ echo apex-pdp 16:11:25 apex-pdp 16:11:25 +++ component=apex-pdp 16:11:25 +++ shift 16:11:25 +++ [[ 1 -gt 0 ]] 16:11:25 +++ key=--grafana 16:11:25 +++ case $key in 16:11:25 +++ grafana=true 16:11:25 +++ shift 16:11:25 +++ [[ 0 -gt 0 ]] 16:11:25 +++ cd /w/workspace/policy-pap-master-project-csit-verify-pap/compose 16:11:25 +++ echo 'Configuring docker compose...' 16:11:25 Configuring docker compose... 16:11:25 +++ source export-ports.sh 16:11:25 +++ source get-versions.sh 16:11:27 +++ '[' -z pap ']' 16:11:27 +++ '[' -n apex-pdp ']' 16:11:27 +++ '[' apex-pdp == logs ']' 16:11:27 +++ '[' true = true ']' 16:11:27 +++ echo 'Starting apex-pdp application with Grafana' 16:11:27 Starting apex-pdp application with Grafana 16:11:27 +++ docker-compose up -d apex-pdp grafana 16:11:27 Creating network "compose_default" with the default driver 16:11:28 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 16:11:28 latest: Pulling from prom/prometheus 16:11:31 Digest: sha256:5ccad477d0057e62a7cd1981ffcc43785ac10c5a35522dc207466ff7e7ec845f 16:11:31 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 16:11:31 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... 16:11:31 latest: Pulling from grafana/grafana 16:11:36 Digest: sha256:f9811e4e687ffecf1a43adb9b64096c50bc0d7a782f8608530f478b6542de7d5 16:11:36 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 16:11:36 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 16:11:36 10.10.2: Pulling from mariadb 16:11:41 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 16:11:41 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 16:11:41 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 16:11:41 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator 16:11:45 Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13 16:11:45 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT 16:11:45 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 16:11:46 latest: Pulling from confluentinc/cp-zookeeper 16:11:59 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866 16:11:59 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 16:11:59 Pulling kafka (confluentinc/cp-kafka:latest)... 16:11:59 latest: Pulling from confluentinc/cp-kafka 16:12:12 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047 16:12:12 Status: Downloaded newer image for confluentinc/cp-kafka:latest 16:12:14 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 
16:12:14 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 16:12:26 Digest: sha256:ab0a3a1ee55f1bb0f1d1fd16687dc5c3f589ad75c369848b1db1aef2f7ab963c 16:12:26 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 16:12:26 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 16:12:28 3.1.2-SNAPSHOT: Pulling from onap/policy-api 16:12:32 Digest: sha256:fdc9aa26830be0af882248f5f576f0e9466b8e17ff432e8618d01432efa85803 16:12:32 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 16:12:32 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 16:12:32 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 16:12:36 Digest: sha256:5e7bdea16830f0dd3e16df519f0efbee38922192c2a79297bcac6699fa44e067 16:12:36 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 16:12:36 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 16:12:36 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 16:12:43 Digest: sha256:3f9880e060c3465862043c69561fa1d43ab448175d1adf3efd53d751d3b9947d 16:12:43 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 16:12:43 Creating prometheus ... 16:12:43 Creating compose_zookeeper_1 ... 16:12:43 Creating simulator ... 16:12:43 Creating mariadb ... 16:12:57 Creating prometheus ... done 16:12:57 Creating grafana ... 16:12:58 Creating mariadb ... done 16:12:58 Creating policy-db-migrator ... 16:12:58 Creating policy-db-migrator ... done 16:12:58 Creating policy-api ... 16:12:59 Creating policy-api ... done 16:13:00 Creating grafana ... done 16:13:02 Creating simulator ... done 16:13:03 Creating compose_zookeeper_1 ... done 16:13:03 Creating kafka ... 16:13:04 Creating kafka ... done 16:13:04 Creating policy-pap ... 16:13:05 Creating policy-pap ... done 16:13:05 Creating policy-apex-pdp ... 16:13:06 Creating policy-apex-pdp ... done 16:13:06 +++ echo 'Prometheus server: http://localhost:30259' 16:13:06 Prometheus server: http://localhost:30259 16:13:06 +++ echo 'Grafana server: http://localhost:30269' 16:13:06 Grafana server: http://localhost:30269 16:13:06 +++ cd /w/workspace/policy-pap-master-project-csit-verify-pap 16:13:06 ++ sleep 10 16:13:16 ++ unset http_proxy https_proxy 16:13:16 ++ bash /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 16:13:16 Waiting for REST to come up on localhost port 30003... 
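Note the two WARNINGs at the docker login step above: passing -p on the command line is insecure, and the credential lands unencrypted in /home/jenkins/.docker/config.json. The form Docker itself recommends would look like this (NEXUS_PASS is a hypothetical variable name; this job literally used the password 'docker'):

    echo "$NEXUS_PASS" | docker login -u docker --password-stdin nexus3.onap.org:10001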
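wait_for_rest.sh is not reproduced in the log, but its output pattern below (a docker ps snapshot roughly every 5 seconds until port 30003 answers) suggests a polling loop along these lines; this is a guess at equivalent behavior, not the actual script:

    # poll until something is listening on localhost:30003,
    # printing container status between attempts
    while ! timeout 1 bash -c ': </dev/tcp/localhost/30003' 2>/dev/null; do
      docker ps --format 'table {{ .Names }}\t{{ .Status }}'
      sleep 5
    done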
16:13:16 NAMES STATUS 16:13:16 policy-apex-pdp Up 10 seconds 16:13:16 policy-pap Up 11 seconds 16:13:16 kafka Up 11 seconds 16:13:16 policy-api Up 16 seconds 16:13:16 grafana Up 15 seconds 16:13:16 mariadb Up 18 seconds 16:13:16 simulator Up 13 seconds 16:13:16 compose_zookeeper_1 Up 13 seconds 16:13:16 prometheus Up 19 seconds 16:13:21 NAMES STATUS 16:13:21 policy-apex-pdp Up 15 seconds 16:13:21 policy-pap Up 16 seconds 16:13:21 kafka Up 17 seconds 16:13:21 policy-api Up 21 seconds 16:13:21 grafana Up 20 seconds 16:13:21 mariadb Up 23 seconds 16:13:21 simulator Up 19 seconds 16:13:21 compose_zookeeper_1 Up 18 seconds 16:13:21 prometheus Up 24 seconds 16:13:26 NAMES STATUS 16:13:26 policy-apex-pdp Up 20 seconds 16:13:26 policy-pap Up 21 seconds 16:13:26 kafka Up 22 seconds 16:13:26 policy-api Up 26 seconds 16:13:26 grafana Up 25 seconds 16:13:26 mariadb Up 28 seconds 16:13:26 simulator Up 24 seconds 16:13:26 compose_zookeeper_1 Up 23 seconds 16:13:26 prometheus Up 29 seconds 16:13:31 NAMES STATUS 16:13:31 policy-apex-pdp Up 25 seconds 16:13:31 policy-pap Up 26 seconds 16:13:31 kafka Up 27 seconds 16:13:31 policy-api Up 31 seconds 16:13:31 grafana Up 30 seconds 16:13:31 mariadb Up 33 seconds 16:13:31 simulator Up 29 seconds 16:13:31 compose_zookeeper_1 Up 28 seconds 16:13:31 prometheus Up 34 seconds 16:13:36 NAMES STATUS 16:13:36 policy-apex-pdp Up 30 seconds 16:13:36 policy-pap Up 31 seconds 16:13:36 kafka Up 32 seconds 16:13:36 policy-api Up 36 seconds 16:13:36 grafana Up 35 seconds 16:13:36 mariadb Up 38 seconds 16:13:36 simulator Up 34 seconds 16:13:36 compose_zookeeper_1 Up 33 seconds 16:13:36 prometheus Up 39 seconds 16:13:36 ++ export 'SUITES=pap-test.robot 16:13:36 pap-slas.robot' 16:13:36 ++ SUITES='pap-test.robot 16:13:36 pap-slas.robot' 16:13:36 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 16:13:36 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates' 16:13:36 + load_set 16:13:36 + _setopts=hxB 16:13:36 ++ echo braceexpand:hashall:interactive-comments:xtrace 16:13:36 ++ tr : ' ' 16:13:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:13:36 + set +o braceexpand 16:13:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:13:36 + set +o hashall 16:13:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:13:36 + set +o interactive-comments 16:13:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 16:13:36 + set +o xtrace 16:13:36 ++ echo hxB 16:13:36 ++ sed 's/./& /g' 16:13:36 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:13:36 + set +h 16:13:36 + for i in $(echo "$_setopts" | sed 's/./& /g') 16:13:36 + set +x 16:13:36 + tee /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap/_sysinfo-1-after-setup.txt 16:13:36 + docker_stats 16:13:36 ++ uname -s 16:13:36 + '[' Linux == Darwin ']' 16:13:36 + sh -c 'top -bn1 | head -3' 16:13:36 top - 16:13:36 up 4 min, 0 users, load average: 3.07, 1.37, 0.55 16:13:36 Tasks: 212 total, 1 running, 131 sleeping, 0 stopped, 0 zombie 16:13:36 %Cpu(s): 12.6 us, 2.8 sy, 0.0 ni, 80.0 id, 4.6 wa, 0.0 hi, 0.1 si, 0.1 st 16:13:36 + echo 16:13:36 + sh -c 'free -h' 16:13:36 16:13:36 total used free shared buff/cache available 16:13:36 Mem: 31G 2.6G 22G 1.3M 6.4G 28G 16:13:36 Swap: 1.0G 0B 1.0G 16:13:36 + echo 16:13:36 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 
16:13:36 16:13:36 NAMES STATUS 16:13:36 policy-apex-pdp Up 30 seconds 16:13:36 policy-pap Up 31 seconds 16:13:36 kafka Up 32 seconds 16:13:36 policy-api Up 36 seconds 16:13:36 grafana Up 35 seconds 16:13:36 mariadb Up 38 seconds 16:13:36 simulator Up 34 seconds 16:13:36 compose_zookeeper_1 Up 33 seconds 16:13:36 prometheus Up 39 seconds 16:13:36 + echo 16:13:36 + docker stats --no-stream 16:13:36 16:13:39 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 16:13:39 9a264d855d47 policy-apex-pdp 255.74% 197.3MiB / 31.41GiB 0.61% 7.05kB / 6.71kB 0B / 0B 48 16:13:39 05b9d658199e policy-pap 7.89% 500.6MiB / 31.41GiB 1.56% 28.5kB / 30.7kB 0B / 153MB 61 16:13:39 5f5c29454339 kafka 60.57% 377.5MiB / 31.41GiB 1.17% 72.9kB / 74.5kB 0B / 500kB 83 16:13:39 38c95baac321 policy-api 0.11% 569.3MiB / 31.41GiB 1.77% 1e+03kB / 710kB 0B / 0B 57 16:13:39 3d1e71f089b6 grafana 0.04% 57.05MiB / 31.41GiB 0.18% 19kB / 3.71kB 0B / 24.8MB 18 16:13:39 18951d56adc1 mariadb 0.02% 102.1MiB / 31.41GiB 0.32% 996kB / 1.19MB 11MB / 68MB 37 16:13:39 75647129bdd1 simulator 0.07% 121.7MiB / 31.41GiB 0.38% 1.15kB / 0B 0B / 0B 76 16:13:39 0f10c46aa465 compose_zookeeper_1 0.30% 104.4MiB / 31.41GiB 0.32% 56.9kB / 51.9kB 0B / 397kB 60 16:13:39 a4be7e20cad5 prometheus 0.00% 18.43MiB / 31.41GiB 0.06% 1.59kB / 0B 156kB / 0B 13 16:13:39 + echo 16:13:39 16:13:39 + cd /tmp/tmp.STfXWGE5EF 16:13:39 + echo 'Reading the testplan:' 16:13:39 Reading the testplan: 16:13:39 + echo 'pap-test.robot 16:13:39 pap-slas.robot' 16:13:39 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 16:13:39 + sed 's|^|/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/|' 16:13:39 + cat testplan.txt 16:13:39 /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot 16:13:39 /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot 16:13:39 ++ xargs 16:13:39 + SUITES='/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot' 16:13:39 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 16:13:39 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates' 16:13:39 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 16:13:39 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates 16:13:39 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot ...' 16:13:39 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot ... 
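The testplan expansion just traced is a three-stage pipeline: strip comments and blank lines, prefix each suite with the tests directory, and flatten to one line. An equivalent standalone form, with the paths taken from the log:

    egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
      | sed 's|^|/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/|' \
      | xargs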
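The relax_set/load_set pairs that recur throughout this trace (including around the Robot run that follows) save, relax, and restore shell options so that errexit/pipefail in the outer script does not kill sourced helpers. The function bodies are never printed, but the xtrace output implies roughly this shape; a reconstruction, not the actual script:

    save_set()   { RUN_CSIT_SAVE_SET="$-"; RUN_CSIT_SHELLOPTS="$SHELLOPTS"; }
    relax_set()  { set +e; set +o pipefail; }
    source_safely() { [ -z "$1" ] && exit 1; relax_set; . "$1"; load_set; }
    # load_set walks SHELLOPTS and the saved flag string to put the options back,
    # as seen in the repeated "set +o ..." runs above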
16:13:39 + relax_set
16:13:39 + set +e
16:13:39 + set +o pipefail
16:13:39 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot
16:13:39 ==============================================================================
16:13:39 pap
16:13:39 ==============================================================================
16:13:39 pap.Pap-Test
16:13:39 ==============================================================================
16:13:40 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
16:13:40 ------------------------------------------------------------------------------
16:13:41 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
16:13:41 ------------------------------------------------------------------------------
16:13:41 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
16:13:41 ------------------------------------------------------------------------------
16:13:42 Healthcheck :: Verify policy pap health check | PASS |
16:13:42 ------------------------------------------------------------------------------
16:14:02 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
16:14:02 ------------------------------------------------------------------------------
16:14:02 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
16:14:02 ------------------------------------------------------------------------------
16:14:03 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
16:14:03 ------------------------------------------------------------------------------
16:14:03 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
16:14:03 ------------------------------------------------------------------------------
16:14:03 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
16:14:03 ------------------------------------------------------------------------------
16:14:03 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
16:14:03 ------------------------------------------------------------------------------
16:14:04 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
16:14:04 ------------------------------------------------------------------------------
16:14:04 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
16:14:04 ------------------------------------------------------------------------------
16:14:04 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
16:14:04 ------------------------------------------------------------------------------
16:14:04 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
16:14:04 ------------------------------------------------------------------------------
16:14:04 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
16:14:04 ------------------------------------------------------------------------------
16:14:05 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
16:14:05 ------------------------------------------------------------------------------
16:14:05 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
16:14:05 ------------------------------------------------------------------------------
16:14:25 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
16:14:25 DEPLOYMENT != UNDEPLOYMENT
16:14:25 ------------------------------------------------------------------------------
16:14:25 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
16:14:25 ------------------------------------------------------------------------------
16:14:25 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
16:14:25 ------------------------------------------------------------------------------
16:14:26 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
16:14:26 ------------------------------------------------------------------------------
16:14:26 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
16:14:26 ------------------------------------------------------------------------------
16:14:26 pap.Pap-Test | FAIL |
16:14:26 22 tests, 21 passed, 1 failed
16:14:26 ==============================================================================
16:14:26 pap.Pap-Slas
16:14:26 ==============================================================================
16:15:26 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
16:15:26 ------------------------------------------------------------------------------
16:15:26 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
16:15:26 ------------------------------------------------------------------------------
16:15:26 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
16:15:26 ------------------------------------------------------------------------------
16:15:26 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
16:15:26 ------------------------------------------------------------------------------
16:15:26 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
16:15:26 ------------------------------------------------------------------------------
16:15:26 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
16:15:26 ------------------------------------------------------------------------------
16:15:26 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
16:15:26 ------------------------------------------------------------------------------
16:15:26 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
16:15:26 ------------------------------------------------------------------------------
16:15:26 pap.Pap-Slas | PASS |
16:15:26 8 tests, 8 passed, 0 failed
16:15:26 ==============================================================================
16:15:26 pap | FAIL |
16:15:26 30 tests, 29 passed, 1 failed
16:15:26 ==============================================================================
16:15:26 Output: /tmp/tmp.STfXWGE5EF/output.xml
16:15:26 Log: /tmp/tmp.STfXWGE5EF/log.html
16:15:26 Report: /tmp/tmp.STfXWGE5EF/report.html
16:15:26 + RESULT=1
16:15:26 + load_set
16:15:26 + _setopts=hxB
16:15:26 ++ echo braceexpand:hashall:interactive-comments:xtrace
16:15:26 ++ tr : ' '
16:15:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:15:26 + set +o braceexpand
16:15:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:15:26 + set +o hashall
16:15:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:15:26 + set +o interactive-comments
16:15:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:15:26 + set +o xtrace
16:15:26 ++ echo hxB
16:15:26 ++ sed 's/./& /g'
16:15:26 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:15:26 + set +h
16:15:26 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:15:26 + set +x
16:15:26 + echo 'RESULT: 1'
16:15:26 RESULT: 1
16:15:26 + exit 1
16:15:26 + on_exit
16:15:26 + rc=1
16:15:26 + [[ -n /w/workspace/policy-pap-master-project-csit-verify-pap ]]
16:15:26 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
16:15:26 NAMES STATUS
16:15:26 policy-apex-pdp Up 2 minutes
16:15:26 policy-pap Up 2 minutes
16:15:26 kafka Up 2 minutes
16:15:26 policy-api Up 2 minutes
16:15:26 grafana Up 2 minutes
16:15:26 mariadb Up 2 minutes
16:15:26 simulator Up 2 minutes
16:15:26 compose_zookeeper_1 Up 2 minutes
16:15:26 prometheus Up 2 minutes
16:15:26 + docker_stats
16:15:26 ++ uname -s
16:15:26 + '[' Linux == Darwin ']'
16:15:26 + sh -c 'top -bn1 | head -3'
16:15:26 top - 16:15:26 up 6 min, 0 users, load average: 0.71, 1.08, 0.54
16:15:26 Tasks: 202 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
16:15:26 %Cpu(s): 10.5 us, 2.1 sy, 0.0 ni, 83.7 id, 3.6 wa, 0.0 hi, 0.0 si, 0.1 st
16:15:26 + echo
16:15:26
16:15:26 + sh -c 'free -h'
16:15:26 total used free shared buff/cache available
16:15:26 Mem: 31G 2.8G 22G 1.3M 6.5G 28G
16:15:26 Swap: 1.0G 0B 1.0G
16:15:26 + echo
16:15:26
16:15:26 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
16:15:26 NAMES STATUS
16:15:26 policy-apex-pdp Up 2 minutes
16:15:26 policy-pap Up 2 minutes
16:15:26 kafka Up 2 minutes
16:15:26 policy-api Up 2 minutes
16:15:26 grafana Up 2 minutes
16:15:26 mariadb Up 2 minutes
16:15:26 simulator Up 2 minutes
16:15:26 compose_zookeeper_1 Up 2 minutes
16:15:26 prometheus Up 2 minutes
16:15:26 + echo
16:15:26
16:15:26 + docker stats --no-stream
16:15:29 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
16:15:29 9a264d855d47 policy-apex-pdp 0.39% 186.7MiB / 31.41GiB 0.58% 55.9kB / 89.8kB 0B / 0B 52
16:15:29 05b9d658199e policy-pap 0.52% 496.9MiB / 31.41GiB 1.54% 2.33MB / 800kB 0B / 153MB 65
16:15:29 5f5c29454339 kafka 1.15% 400.8MiB / 31.41GiB 1.25% 240kB / 215kB 0B / 606kB 85
16:15:29 38c95baac321 policy-api 0.09% 570.6MiB / 31.41GiB 1.77% 2.49MB / 1.26MB 0B / 0B 60
16:15:29 3d1e71f089b6 grafana 0.03% 64.09MiB / 31.41GiB 0.20% 20kB / 4.7kB 0B / 24.8MB 18
16:15:29 18951d56adc1 mariadb 0.01% 103.5MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 68.3MB 28
16:15:29 75647129bdd1 simulator 0.07% 121.9MiB / 31.41GiB 0.38% 1.5kB / 0B 0B / 0B 78
16:15:29 0f10c46aa465 compose_zookeeper_1
0.09% 101.4MiB / 31.41GiB 0.32% 59.8kB / 53.4kB 0B / 397kB 60 16:15:29 a4be7e20cad5 prometheus 0.10% 24.72MiB / 31.41GiB 0.08% 170kB / 10.8kB 156kB / 0B 13 16:15:29 + echo 16:15:29 16:15:29 + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh 16:15:29 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh ']' 16:15:29 + relax_set 16:15:29 + set +e 16:15:29 + set +o pipefail 16:15:29 + . /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh 16:15:29 ++ echo 'Shut down started!' 16:15:29 Shut down started! 16:15:29 ++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']' 16:15:29 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-verify-pap/compose 16:15:29 ++ cd /w/workspace/policy-pap-master-project-csit-verify-pap/compose 16:15:29 ++ source export-ports.sh 16:15:29 ++ source get-versions.sh 16:15:31 ++ echo 'Collecting logs from docker compose containers...' 16:15:31 Collecting logs from docker compose containers... 16:15:31 ++ docker-compose logs 16:15:32 ++ cat docker_compose.log 16:15:32 Attaching to policy-apex-pdp, policy-pap, kafka, policy-api, policy-db-migrator, grafana, mariadb, simulator, compose_zookeeper_1, prometheus 16:15:32 zookeeper_1 | ===> User 16:15:32 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 16:15:32 zookeeper_1 | ===> Configuring ... 16:15:32 zookeeper_1 | ===> Running preflight checks ... 16:15:32 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 16:15:32 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 16:15:32 zookeeper_1 | ===> Launching ... 16:15:32 zookeeper_1 | ===> Launching zookeeper ... 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,748] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,754] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,754] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,754] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,754] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,755] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,755] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,755] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,755] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,757] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,757] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,757] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,757] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,757] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,757] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,757] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,775] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,778] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,778] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,781] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,790] INFO (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,790] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,790] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,791] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,791] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,791] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,791] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,791] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,791] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,791] INFO (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:host.name=0f10c46aa465 (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/k
afka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 
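The "running in standalone mode" warning and the preflight checks earlier in this ZooKeeper log pin down the effective configuration. A minimal /etc/kafka/zookeeper.properties consistent with what the container logs (values read off the log; the actual file is not shown):

    dataDir=/var/lib/zookeeper/data
    dataLogDir=/var/lib/zookeeper/log
    clientPort=2181
    autopurge.snapRetainCount=3
    autopurge.purgeInterval=0
    # no server.N quorum entries, hence the standalone-mode warning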
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,792] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,793] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,793] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,793] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,793] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,794] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,794] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,795] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,795] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,796] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,796] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,796] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,796] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,796] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,796] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,798] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,798] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,799] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,799] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,799] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,819] INFO Logging initialized @490ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,907] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,908] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,927] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,964] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,964] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,965] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,969] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,978] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,996] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,996] INFO Started @667ms (org.eclipse.jetty.server.Server)
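The Jetty server that just started on 0.0.0.0:8080 is ZooKeeper's embedded AdminServer, as the next log entry confirms. A sketch of exercising it over HTTP, assuming port 8080 is published to the host; the index at /commands (the "command URL" the log names) lists the exact endpoints this build supports, so treat individual command names as assumptions to verify against that index:

    # List every diagnostic command the AdminServer exposes.
    curl http://localhost:8080/commands
    # The HTTP equivalent of the "ruok" four-letter word.
    curl http://localhost:8080/commands/ruok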
16:15:32 zookeeper_1 | [2024-03-20 16:13:06,996] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,001] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,001] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,003] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,004] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,017] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,017] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,018] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,018] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,023] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,023] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,026] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,026] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,027] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,035] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,036] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,047] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
16:15:32 zookeeper_1 | [2024-03-20 16:13:07,048] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
16:15:32 zookeeper_1 | [2024-03-20 16:13:08,475] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073321298Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2024-03-20T16:13:01Z
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073845263Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073861823Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073869553Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073876373Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073884873Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073900273Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073907443Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073915253Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073926474Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073938904Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073955764Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073962964Z level=info msg=Target target=[all]
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073975244Z level=info msg="Path Home" path=/usr/share/grafana
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073986174Z level=info msg="Path Data" path=/var/lib/grafana
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073992354Z level=info msg="Path Logs" path=/var/log/grafana
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.073999114Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.074006594Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
16:15:32 grafana | logger=settings t=2024-03-20T16:13:01.074012964Z level=info msg="App mode production"
16:15:32 grafana | logger=sqlstore t=2024-03-20T16:13:01.074677821Z level=info msg="Connecting to DB" dbtype=sqlite3
16:15:32 grafana | logger=sqlstore t=2024-03-20T16:13:01.074720832Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.075974174Z level=info msg="Starting DB migrations"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.076855823Z level=info msg="Executing migration" id="create migration_log table"
16:15:32 grafana |
logger=migrator t=2024-03-20T16:13:01.077818863Z level=info msg="Migration successfully executed" id="create migration_log table" duration=962.35µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.082559691Z level=info msg="Executing migration" id="create user table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.083128816Z level=info msg="Migration successfully executed" id="create user table" duration=569.146µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.089006475Z level=info msg="Executing migration" id="add unique index user.login" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.089805533Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=798.988µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.093032685Z level=info msg="Executing migration" id="add unique index user.email" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.094380449Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.346774ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.098027805Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.099692812Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.664357ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.105503161Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.106122087Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=619.246µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.108559841Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.111046956Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.486335ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.114131637Z level=info msg="Executing migration" id="create user table v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.114925265Z level=info msg="Migration successfully executed" id="create user table v2" duration=793.768µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.120044056Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.121280079Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.235353ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.124479591Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.125846075Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.365404ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.128993426Z level=info msg="Executing migration" id="copy data_source v1 to v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.12939875Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=405.634µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.132070947Z level=info msg="Executing migration" id="Drop old table user_v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.132617803Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=546.376µs 16:15:32 grafana | 
logger=migrator t=2024-03-20T16:13:01.138471991Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.139555612Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.082581ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.142546602Z level=info msg="Executing migration" id="Update user table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.142577062Z level=info msg="Migration successfully executed" id="Update user table charset" duration=31.21µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.145956007Z level=info msg="Executing migration" id="Add last_seen_at column to user" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.147122118Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.165551ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.153052088Z level=info msg="Executing migration" id="Add missing user data" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.153358401Z level=info msg="Migration successfully executed" id="Add missing user data" duration=305.503µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.156622054Z level=info msg="Executing migration" id="Add is_disabled column to user" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.158587283Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.964849ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.162150469Z level=info msg="Executing migration" id="Add index user.login/user.email" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.162969067Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=817.728µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.166026608Z level=info msg="Executing migration" id="Add is_service_account column to user" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.167282511Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.255333ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.173393462Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.182466693Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.069391ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.185729386Z level=info msg="Executing migration" id="Add uid column to user" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.186905777Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.175792ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.189884887Z level=info msg="Executing migration" id="Update uid column values for users" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.190076259Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=191.882µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.192641255Z level=info msg="Executing migration" id="Add unique index user_uid" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.193576345Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=935.57µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.199726216Z level=info msg="Executing migration" id="create 
temp user table v1-7" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.200506574Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=780.438µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.203614265Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.204386113Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=771.228µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.207636925Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.208352183Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=715.528µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.213950959Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.214617976Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=666.817µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.219423624Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.220829398Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.406424ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.224682837Z level=info msg="Executing migration" id="Update temp_user table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.224714957Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=33.251µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.231076941Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.231885049Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=804.548µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.23502786Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.236387264Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.358724ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.239365644Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.240261353Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=896.409µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.246466625Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.247211623Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=668.767µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.250448155Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.253837749Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.386764ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.256775348Z 
level=info msg="Executing migration" id="create temp_user v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.257729618Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=953.73µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.263470726Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.264812649Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.339843ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.268018111Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.269287034Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.268523ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.273151533Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.274077202Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=926.779µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.279168923Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.279883571Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=714.528µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.283686819Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.284080503Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=393.224µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.28677689Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.287249374Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=472.144µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.290362876Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.290943301Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=579.985µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.296112333Z level=info msg="Executing migration" id="create star table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.297074323Z level=info msg="Migration successfully executed" id="create star table" duration=961.74µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.300008103Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.30071405Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=705.437µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.303644819Z level=info msg="Executing migration" id="create org table v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.304417597Z level=info msg="Migration successfully executed" id="create org table v1" duration=770.378µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.3076862Z level=info msg="Executing migration" id="create index 
UQE_org_name - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.308836871Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.148071ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.31376751Z level=info msg="Executing migration" id="create org_user table v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.314808901Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.041031ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.317839931Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.318583139Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=741.138µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.321253136Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.321996323Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=743.007µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.324873512Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.325989583Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.115291ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.330729291Z level=info msg="Executing migration" id="Update org table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.330767081Z level=info msg="Migration successfully executed" id="Update org table charset" duration=39.83µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.33365443Z level=info msg="Executing migration" id="Update org_user table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.33367918Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=25.7µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.338115985Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.338524499Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=413.834µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.342143775Z level=info msg="Executing migration" id="create dashboard table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.343003304Z level=info msg="Migration successfully executed" id="create dashboard table" duration=859.619µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.346255127Z level=info msg="Executing migration" id="add index dashboard.account_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.347125225Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=869.828µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.351995434Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.353425939Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.429455ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.424028647Z level=info msg="Executing migration" id="create dashboard_tag table" 16:15:32 grafana | logger=migrator 
t=2024-03-20T16:13:01.425199979Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.171982ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.428543252Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.429901236Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.357504ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.435464292Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.43630078Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=837.128µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.439641144Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.444745465Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.103831ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.447614764Z level=info msg="Executing migration" id="create dashboard v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.448406302Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=793.168µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.453277981Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.454060229Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=781.588µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.456959968Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.457823636Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=863.138µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.460983058Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.461369082Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=385.684µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.466607614Z level=info msg="Executing migration" id="drop table dashboard_v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.467427162Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=819.278µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.470354772Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.470501914Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=152.442µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.473682926Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.476790317Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.106911ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.486534144Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 16:15:32 
grafana | logger=migrator t=2024-03-20T16:13:01.489761267Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.224453ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.492983549Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.49601469Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=3.032801ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.498696067Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.499527535Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=831.508µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.504280392Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.50704197Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.759728ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.510185862Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.511598336Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.413934ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.51498158Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.515786968Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=804.838µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.520878449Z level=info msg="Executing migration" id="Update dashboard table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.520903979Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=25.66µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.524277753Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.524354204Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=77.331µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.527647707Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.530816559Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.168292ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.533863869Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.535740308Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.876549ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.540532656Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.543943151Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=3.410775ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.547022251Z level=info msg="Executing migration" id="Add column uid in dashboard" 16:15:32 grafana | logger=migrator 
t=2024-03-20T16:13:01.549082472Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.059551ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.552442416Z level=info msg="Executing migration" id="Update uid column values in dashboard" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.552674298Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=230.972µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.557766769Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.558608558Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=841.339µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.56182339Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.563154253Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.330163ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.56677063Z level=info msg="Executing migration" id="Update dashboard title length" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.566862721Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=88.48µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.572613208Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.573481137Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=867.329µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.576678919Z level=info msg="Executing migration" id="create dashboard_provisioning" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.577398446Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=720.477µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.58076023Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.585822081Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.062031ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.589891452Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.590585989Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=694.577µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.593584519Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.594412327Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=827.478µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.597821481Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.599171165Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.349154ms 
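Each "Executing migration"/"Migration successfully executed" pair above corresponds to one row of Grafana's schema-migration history, persisted in the SQLite file created earlier at /var/lib/grafana/grafana.db. A sketch for inspecting that history once the stack is up; the container name grafana and the migration_log column names are assumptions to check against the compose file and the Grafana source for this version:

    # Copy the database out of the (assumed) grafana container, then list recent migrations.
    docker cp grafana:/var/lib/grafana/grafana.db /tmp/grafana.db
    sqlite3 /tmp/grafana.db 'SELECT migration_id, success, timestamp FROM migration_log ORDER BY timestamp DESC LIMIT 10;'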
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.603722241Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.604259346Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=537.005µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.607950613Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.608460598Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=509.745µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.61163328Z level=info msg="Executing migration" id="Add check_sum column"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.613781621Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.147791ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.618323997Z level=info msg="Executing migration" id="Add index for dashboard_title"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.619189125Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=866.118µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.624031014Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.624332777Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=303.793µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.634048545Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
16:15:32 kafka | ===> User
16:15:32 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
16:15:32 kafka | ===> Configuring ...
16:15:32 kafka | Running in Zookeeper mode...
16:15:32 kafka | ===> Running preflight checks ...
16:15:32 kafka | ===> Check if /var/lib/kafka/data is writable ...
16:15:32 kafka | ===> Check if Zookeeper is healthy ...
16:15:32 kafka | SLF4J: Class path contains multiple SLF4J bindings.
16:15:32 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
16:15:32 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
16:15:32 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16:15:32 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
16:15:32 kafka | [2024-03-20 16:13:08,417] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
16:15:32 kafka | [2024-03-20 16:13:08,418] INFO Client environment:host.name=5f5c29454339 (org.apache.zookeeper.ZooKeeper)
16:15:32 kafka | [2024-03-20 16:13:08,418] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
16:15:32 kafka | [2024-03-20 16:13:08,418] INFO Client environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,418] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,418] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-
annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd
-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,418] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,418] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,418] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,418] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,418] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,419] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,419] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,419] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,419] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:08,419] INFO Client environment:os.memory.free=487MB 
(org.apache.zookeeper.ZooKeeper)
16:15:32 kafka | [2024-03-20 16:13:08,419] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
16:15:32 kafka | [2024-03-20 16:13:08,419] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
16:15:32 kafka | [2024-03-20 16:13:08,422] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@2fd6b6c7 (org.apache.zookeeper.ZooKeeper)
16:15:32 kafka | [2024-03-20 16:13:08,425] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
16:15:32 kafka | [2024-03-20 16:13:08,430] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
16:15:32 kafka | [2024-03-20 16:13:08,437] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
16:15:32 kafka | [2024-03-20 16:13:08,450] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn)
16:15:32 kafka | [2024-03-20 16:13:08,450] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
16:15:32 kafka | [2024-03-20 16:13:08,460] INFO Socket connection established, initiating session, client: /172.17.0.9:48692, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn)
16:15:32 kafka | [2024-03-20 16:13:08,516] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000402480000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
16:15:32 kafka | [2024-03-20 16:13:08,639] INFO Session: 0x100000402480000 closed (org.apache.zookeeper.ZooKeeper)
16:15:32 kafka | [2024-03-20 16:13:08,639] INFO EventThread shut down for session: 0x100000402480000 (org.apache.zookeeper.ClientCnxn)
16:15:32 kafka | Using log4j config /etc/kafka/log4j.properties
16:15:32 kafka | ===> Launching ...
16:15:32 kafka | ===> Launching kafka ...
16:15:32 kafka | [2024-03-20 16:13:09,292] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
16:15:32 kafka | [2024-03-20 16:13:09,601] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
16:15:32 kafka | [2024-03-20 16:13:09,669] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
16:15:32 kafka | [2024-03-20 16:13:09,670] INFO starting (kafka.server.KafkaServer)
16:15:32 kafka | [2024-03-20 16:13:09,670] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
16:15:32 kafka | [2024-03-20 16:13:09,683] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:host.name=5f5c29454339 (org.apache.zookeeper.ZooKeeper)
16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bi
n/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,688] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,690] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) 16:15:32 kafka | [2024-03-20 16:13:09,694] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 16:15:32 kafka | [2024-03-20 16:13:09,700] INFO zookeeper.request.timeout value is 0. 
feature enabled=false (org.apache.zookeeper.ClientCnxn) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.634445839Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=404.164µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.637345698Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.638028445Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=683.227µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.641287217Z level=info msg="Executing migration" id="Add isPublic for dashboard" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.643631781Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.343864ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.646285928Z level=info msg="Executing migration" id="create data_source table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.647240577Z level=info msg="Migration successfully executed" id="create data_source table" duration=954.489µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.650726392Z level=info msg="Executing migration" id="add index data_source.account_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.651649331Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=923.089µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.657063136Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.658412119Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.353883ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.661571141Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.662303318Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=732.467µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.666036876Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.667615522Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.585826ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.672219348Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.679722003Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.527355ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.682938225Z level=info msg="Executing migration" id="create data_source table v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.683978326Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.039061ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.687872945Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.688707394Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=833.178µs 16:15:32 grafana | logger=migrator 
t=2024-03-20T16:13:01.69234917Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.693228739Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=879.419µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.696440251Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.696955126Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=514.635µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.701155628Z level=info msg="Executing migration" id="Add column with_credentials" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.70337323Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.217352ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.708058017Z level=info msg="Executing migration" id="Add secure json data column" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.710369691Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.314514ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.713437902Z level=info msg="Executing migration" id="Update data_source table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.713459562Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=22.54µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.71726714Z level=info msg="Executing migration" id="Update initial version to 1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.717406581Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=139.441µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.719736285Z level=info msg="Executing migration" id="Add read_only data column" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.723499222Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.764067ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.727326251Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.727743035Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=418.514µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.731023698Z level=info msg="Executing migration" id="Update json_data with nulls" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.73117793Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=154.612µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.734863377Z level=info msg="Executing migration" id="Add uid column" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.73719803Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.334073ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.740730385Z level=info msg="Executing migration" id="Update uid value" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.741105009Z level=info msg="Migration successfully executed" id="Update uid value" duration=378.774µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.744789896Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 16:15:32 grafana | logger=migrator 
t=2024-03-20T16:13:01.746060159Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.254943ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.751735026Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.752952828Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.219612ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.756305862Z level=info msg="Executing migration" id="create api_key table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.75713062Z level=info msg="Migration successfully executed" id="create api_key table" duration=823.958µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.760466074Z level=info msg="Executing migration" id="add index api_key.account_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.761284242Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=818.009µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.76607546Z level=info msg="Executing migration" id="add index api_key.key" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.766831137Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=753.147µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.769900488Z level=info msg="Executing migration" id="add index api_key.account_id_name" 16:15:32 policy-db-migrator | Waiting for mariadb port 3306... 16:15:32 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 16:15:32 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 16:15:32 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 16:15:32 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 16:15:32 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 16:15:32 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 16:15:32 policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! 
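
[Note] The repeated "Connection refused" lines above are the policy-db-migrator container polling MariaDB's TCP port until the database is ready to accept connections, then proceeding once nc succeeds. A minimal sketch of such a wait loop, assuming nc is available in the container and the host name mariadb resolves on the Docker network (an illustration, not the project's actual startup script):

    # poll TCP port 3306 until MariaDB accepts connections
    # (host name, timeout and sleep interval are assumptions)
    until nc -z -w 2 mariadb 3306; do
      echo "mariadb not ready yet, retrying..."
      sleep 2
    done
    echo "mariadb is accepting connections"
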
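[Note] The section that follows shows the migrator upgrading the policyadmin schema from version 0 to 1300 by applying numbered SQL files in order; because each table is created with CREATE TABLE IF NOT EXISTS, re-running a step against an already-migrated database is harmless. A rough sketch of that apply loop, with the directory layout, database name and credentials assumed purely for illustration:

    # apply numbered upgrade scripts in order; IF NOT EXISTS keeps each step
    # idempotent (path, database and credentials are assumptions)
    for f in /opt/db-migrator/sql/0*.sql; do
      echo "> upgrade $(basename "$f")"
      mysql -h mariadb -upolicy_user -ppolicy_user policyadmin < "$f"
    done
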
16:15:32 policy-db-migrator | 321 blocks 16:15:32 policy-db-migrator | Preparing upgrade release version: 0800 16:15:32 policy-db-migrator | Preparing upgrade release version: 0900 16:15:32 policy-db-migrator | Preparing upgrade release version: 1000 16:15:32 policy-db-migrator | Preparing upgrade release version: 1100 16:15:32 policy-db-migrator | Preparing upgrade release version: 1200 16:15:32 policy-db-migrator | Preparing upgrade release version: 1300 16:15:32 policy-db-migrator | Done 16:15:32 policy-db-migrator | name version 16:15:32 policy-db-migrator | policyadmin 0 16:15:32 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 16:15:32 policy-db-migrator | upgrade: 0 -> 1300 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.770702156Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=801.158µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.773939289Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.774659656Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=719.817µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.778497685Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.779227672Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=729.947µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.782596276Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.783509345Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=913.289µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.787137851Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.794349344Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.210853ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.798448325Z level=info msg="Executing migration" id="create api_key table v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.799256403Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=807.058µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.802490445Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.803356144Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=865.959µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.807580196Z level=info msg="Executing migration" id="create index 
UQE_api_key_key - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.808347694Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=769.098µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.811431965Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.812278383Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=846.008µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.815280583Z level=info msg="Executing migration" id="copy api_key v1 to v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.815652457Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=371.264µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.818762919Z level=info msg="Executing migration" id="Drop old table api_key_v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.819278174Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=514.995µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.823023221Z level=info msg="Executing migration" id="Update api_key table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.823049472Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=26.901µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.826226233Z level=info msg="Executing migration" id="Add expires to api_key table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.828822569Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.594186ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.831770309Z level=info msg="Executing migration" id="Add service account foreign key" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.834389115Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.617456ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.838229254Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.838392846Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=162.332µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.841292865Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.843731659Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.438594ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.846909091Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.849541947Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.631956ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.853455807Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.854229654Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=773.467µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.857243995Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 16:15:32 grafana | logger=migrator 
t=2024-03-20T16:13:01.85778161Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=537.055µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.860417797Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.861227865Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=809.678µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.89962743Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.900918003Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.292393ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.904825162Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.90561533Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=792.528µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.90959205Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.910377278Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=784.718µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.914736652Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.914802862Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=66.99µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.917981854Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.918012724Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=31µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.920616771Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.923606701Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.98923ms 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName 
VARCHAR(120) DEFAULT NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 16:15:32 
policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.927026135Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.929741312Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.712447ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.933795433Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.933860714Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=65.811µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.9375123Z level=info msg="Executing migration" id="create quota table v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.938215777Z level=info msg="Migration successfully executed" id="create quota table v1" duration=703.007µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.94244148Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.943232087Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=791.228µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.947878824Z level=info msg="Executing migration" id="Update quota table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.947919695Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=42.151µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.951209618Z level=info msg="Executing migration" id="create plugin_setting table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.952027876Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=818.698µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.955351749Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.956281249Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=929.31µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.960600782Z level=info msg="Executing migration" id="Add column plugin_version 
to plugin_settings" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:01.96341783Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.816688ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.113268684Z level=info msg="Executing migration" id="Update plugin_setting table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.113340755Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=78.701µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.117199672Z level=info msg="Executing migration" id="create session table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.118123406Z level=info msg="Migration successfully executed" id="create session table" duration=923.324µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.121481746Z level=info msg="Executing migration" id="Drop old table playlist table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.121566927Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=85.491µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.12856273Z level=info msg="Executing migration" id="Drop old table playlist_item table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.128684042Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=122.152µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.132339996Z level=info msg="Executing migration" id="create playlist table v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.133417752Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.076936ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.137070826Z level=info msg="Executing migration" id="create playlist item table v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.138224453Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.153147ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.143631533Z level=info msg="Executing migration" id="Update playlist table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.143658354Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=26.061µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.147838165Z level=info msg="Executing migration" id="Update playlist_item table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.147883636Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=38.051µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.151369848Z level=info msg="Executing migration" id="Add playlist column created_at" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.156434972Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.063374ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.15961923Z level=info msg="Executing migration" id="Add playlist column updated_at" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.162553623Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.934344ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.168140015Z level=info msg="Executing migration" id="drop preferences table v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.168219177Z level=info msg="Migration 
successfully executed" id="drop preferences table v2" duration=79.682µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.170855746Z level=info msg="Executing migration" id="drop preferences table v3" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.170926937Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=71.931µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.173698758Z level=info msg="Executing migration" id="create preferences table v3" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.174462919Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=765.221µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.177696737Z level=info msg="Executing migration" id="Update preferences table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.177734537Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=39.51µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.183445372Z level=info msg="Executing migration" id="Add column team_id in preferences" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.188478936Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.033224ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.191734454Z level=info msg="Executing migration" id="Update team_id column values in preferences" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.191880106Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=145.842µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.209153902Z level=info msg="Executing migration" id="Add column week_start in preferences" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.211615398Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.462096ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.216139285Z level=info msg="Executing migration" id="Add column preferences.json_data" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.218253576Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.114241ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.224793633Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 16:15:32 kafka | [2024-03-20 16:13:09,702] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 16:15:32 kafka | [2024-03-20 16:13:09,704] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) 16:15:32 kafka | [2024-03-20 16:13:09,710] INFO Socket connection established, initiating session, client: /172.17.0.9:48694, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) 16:15:32 kafka | [2024-03-20 16:13:09,718] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000402480001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 16:15:32 kafka | [2024-03-20 16:13:09,723] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 16:15:32 kafka | [2024-03-20 16:13:10,044] INFO Cluster ID = rOCUpPXHS92ZRRxFHWBetw (kafka.server.KafkaServer) 16:15:32 kafka | [2024-03-20 16:13:10,047] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 16:15:32 kafka | [2024-03-20 16:13:10,094] INFO KafkaConfig values: 16:15:32 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 16:15:32 kafka | alter.config.policy.class.name = null 16:15:32 kafka | alter.log.dirs.replication.quota.window.num = 11 16:15:32 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 16:15:32 kafka | authorizer.class.name = 16:15:32 kafka | auto.create.topics.enable = true 16:15:32 kafka | auto.include.jmx.reporter = true 16:15:32 kafka | auto.leader.rebalance.enable = true 16:15:32 kafka | background.threads = 10 16:15:32 kafka | broker.heartbeat.interval.ms = 2000 16:15:32 kafka | broker.id = 1 16:15:32 kafka | broker.id.generation.enable = true 16:15:32 kafka | broker.rack = null 16:15:32 kafka | broker.session.timeout.ms = 9000 16:15:32 kafka | client.quota.callback.class = null 16:15:32 kafka | compression.type = producer 16:15:32 kafka | connection.failed.authentication.delay.ms = 100 16:15:32 kafka | connections.max.idle.ms = 600000 16:15:32 kafka | connections.max.reauth.ms = 0 16:15:32 kafka | control.plane.listener.name = null 16:15:32 kafka | controlled.shutdown.enable = true 16:15:32 kafka | controlled.shutdown.max.retries = 3 16:15:32 kafka | controlled.shutdown.retry.backoff.ms = 5000 16:15:32 kafka | controller.listener.names = null 16:15:32 mariadb | 2024-03-20 16:12:57+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 16:15:32 mariadb | 2024-03-20 16:12:58+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 16:15:32 mariadb | 2024-03-20 16:12:58+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 16:15:32 mariadb | 2024-03-20 16:12:58+00:00 [Note] [Entrypoint]: Initializing database files 16:15:32 mariadb | 2024-03-20 16:12:58 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 16:15:32 mariadb | 2024-03-20 16:12:58 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 16:15:32 mariadb | 2024-03-20 16:12:58 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 16:15:32 mariadb | 16:15:32 mariadb | 16:15:32 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 16:15:32 mariadb | To do so, start the server, then issue the following command: 16:15:32 mariadb | 16:15:32 mariadb | '/usr/bin/mysql_secure_installation' 16:15:32 mariadb | 16:15:32 mariadb | which will also give you the option of removing the test 16:15:32 mariadb | databases and anonymous user created by default. This is 16:15:32 mariadb | strongly recommended for production servers. 16:15:32 mariadb | 16:15:32 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 16:15:32 mariadb | 16:15:32 mariadb | Please report any problems at https://mariadb.org/jira 16:15:32 mariadb | 16:15:32 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 
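
[Note] The KafkaConfig dump above (continuing below) reflects settings that, on Confluent's cp-kafka image, are conventionally injected as KAFKA_* environment variables, each mapping to the matching server.properties key. A hypothetical docker run that would yield the broker.id, listeners and listener map shown in this dump; the image tag and network name are assumptions, since the job's compose file is not part of this excerpt:

    # each KAFKA_* variable maps to a server.properties key, e.g.
    # KAFKA_ADVERTISED_LISTENERS -> advertised.listeners (tag/network assumed)
    docker run -d --name kafka --network compose_net \
      -e KAFKA_BROKER_ID=1 \
      -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
      -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 \
      -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 \
      -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
      confluentinc/cp-kafka:7.6.0
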
16:15:32 mariadb | 16:15:32 mariadb | Consider joining MariaDB's strong and vibrant community: 16:15:32 mariadb | https://mariadb.org/get-involved/ 16:15:32 mariadb | 16:15:32 mariadb | 2024-03-20 16:12:59+00:00 [Note] [Entrypoint]: Database files initialized 16:15:32 mariadb | 2024-03-20 16:12:59+00:00 [Note] [Entrypoint]: Starting temporary server 16:15:32 mariadb | 2024-03-20 16:12:59+00:00 [Note] [Entrypoint]: Waiting for server startup 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] InnoDB: Number of transaction pools: 1 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 16:15:32 kafka | controller.quorum.append.linger.ms = 25 16:15:32 kafka | controller.quorum.election.backoff.max.ms = 1000 16:15:32 kafka | controller.quorum.election.timeout.ms = 1000 16:15:32 kafka | controller.quorum.fetch.timeout.ms = 2000 16:15:32 kafka | controller.quorum.request.timeout.ms = 2000 16:15:32 kafka | controller.quorum.retry.backoff.ms = 20 16:15:32 kafka | controller.quorum.voters = [] 16:15:32 kafka | controller.quota.window.num = 11 16:15:32 kafka | controller.quota.window.size.seconds = 1 16:15:32 kafka | controller.socket.timeout.ms = 30000 16:15:32 kafka | create.topic.policy.class.name = null 16:15:32 kafka | default.replication.factor = 1 16:15:32 kafka | delegation.token.expiry.check.interval.ms = 3600000 16:15:32 kafka | delegation.token.expiry.time.ms = 86400000 16:15:32 kafka | delegation.token.master.key = null 16:15:32 kafka | delegation.token.max.lifetime.ms = 604800000 16:15:32 kafka | delegation.token.secret.key = null 16:15:32 kafka | delete.records.purgatory.purge.interval.requests = 1 16:15:32 kafka | delete.topic.enable = true 16:15:32 kafka | early.start.listeners = null 16:15:32 kafka | fetch.max.bytes = 57671680 16:15:32 kafka | fetch.purgatory.purge.interval.requests = 1000 16:15:32 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 16:15:32 kafka | group.consumer.heartbeat.interval.ms = 5000 16:15:32 kafka | group.consumer.max.heartbeat.interval.ms = 15000 16:15:32 kafka | group.consumer.max.session.timeout.ms = 60000 16:15:32 kafka | group.consumer.max.size = 2147483647 16:15:32 kafka | group.consumer.min.heartbeat.interval.ms = 5000 16:15:32 kafka | group.consumer.min.session.timeout.ms = 45000 16:15:32 kafka | group.consumer.session.timeout.ms = 45000 16:15:32 kafka | group.coordinator.new.enable = false 16:15:32 kafka | group.coordinator.threads = 1 16:15:32 kafka | group.initial.rebalance.delay.ms = 3000 16:15:32 kafka | group.max.session.timeout.ms = 1800000 16:15:32 kafka | group.max.size = 2147483647 16:15:32 kafka | group.min.session.timeout.ms = 6000 16:15:32 kafka | initial.broker.registration.timeout.ms = 60000 16:15:32 kafka | inter.broker.listener.name = PLAINTEXT 16:15:32 kafka | inter.broker.protocol.version = 3.6-IV2 16:15:32 kafka | kafka.metrics.polling.interval.secs = 10 16:15:32 kafka | kafka.metrics.reporters = [] 16:15:32 kafka | leader.imbalance.check.interval.seconds = 300 16:15:32 kafka | leader.imbalance.per.broker.percentage = 10 16:15:32 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 16:15:32 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 16:15:32 kafka | 
log.cleaner.backoff.ms = 15000 16:15:32 kafka | log.cleaner.dedupe.buffer.size = 134217728 16:15:32 kafka | log.cleaner.delete.retention.ms = 86400000 16:15:32 kafka | log.cleaner.enable = true 16:15:32 kafka | log.cleaner.io.buffer.load.factor = 0.9 16:15:32 kafka | log.cleaner.io.buffer.size = 524288 16:15:32 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 16:15:32 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 16:15:32 kafka | log.cleaner.min.cleanable.ratio = 0.5 16:15:32 kafka | log.cleaner.min.compaction.lag.ms = 0 16:15:32 kafka | log.cleaner.threads = 1 16:15:32 kafka | log.cleanup.policy = [delete] 16:15:32 kafka | log.dir = /tmp/kafka-logs 16:15:32 kafka | log.dirs = /var/lib/kafka/data 16:15:32 kafka | log.flush.interval.messages = 9223372036854775807 16:15:32 kafka | log.flush.interval.ms = null 16:15:32 kafka | log.flush.offset.checkpoint.interval.ms = 60000 16:15:32 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 16:15:32 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 16:15:32 kafka | log.index.interval.bytes = 4096 16:15:32 kafka | log.index.size.max.bytes = 10485760 16:15:32 kafka | log.local.retention.bytes = -2 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] InnoDB: Completed initialization of buffer pool 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] InnoDB: 128 rollback segments are active. 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] InnoDB: log sequence number 46590; transaction id 14 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] Plugin 'FEEDBACK' is disabled. 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 16:15:32 mariadb | 2024-03-20 16:12:59 0 [Note] mariadbd: ready for connections. 16:15:32 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 16:15:32 mariadb | 2024-03-20 16:13:00+00:00 [Note] [Entrypoint]: Temporary server started. 
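
[Note] At this point the MariaDB entrypoint has initialized the data directory and started a temporary, socket-only server so it can run any scripts mounted under /docker-entrypoint-initdb.d (the db.sh script echoed below); when they finish, the temporary server is stopped and the real one is started. A sketch of how such a container is typically wired up, reusing the MYSQL_* values visible in this log; the host-side mount path and image tag are assumptions:

    # init scripts under /docker-entrypoint-initdb.d run once, on first start,
    # against the temporary server (host path and image tag assumed)
    docker run -d --name mariadb \
      -e MYSQL_ROOT_PASSWORD=secret \
      -e MYSQL_USER=policy_user \
      -e MYSQL_PASSWORD=policy_user \
      -v "$PWD/db.sh:/docker-entrypoint-initdb.d/db.sh:ro" \
      mariadb:10.10.2
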
16:15:32 mariadb | 2024-03-20 16:13:02+00:00 [Note] [Entrypoint]: Creating user policy_user 16:15:32 mariadb | 2024-03-20 16:13:02+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 16:15:32 mariadb | 16:15:32 mariadb | 2024-03-20 16:13:02+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 16:15:32 mariadb | 16:15:32 mariadb | 2024-03-20 16:13:02+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 16:15:32 mariadb | #!/bin/bash -xv 16:15:32 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 16:15:32 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 16:15:32 mariadb | # 16:15:32 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 16:15:32 mariadb | # you may not use this file except in compliance with the License. 16:15:32 mariadb | # You may obtain a copy of the License at 16:15:32 mariadb | # 16:15:32 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 16:15:32 mariadb | # 16:15:32 mariadb | # Unless required by applicable law or agreed to in writing, software 16:15:32 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 16:15:32 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 16:15:32 mariadb | # See the License for the specific language governing permissions and 16:15:32 mariadb | # limitations under the License. 16:15:32 mariadb | 16:15:32 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:15:32 mariadb | do 16:15:32 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 16:15:32 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 16:15:32 mariadb | done 16:15:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:15:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 16:15:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 16:15:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:15:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 16:15:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 16:15:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:15:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 16:15:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 16:15:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:15:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 16:15:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 16:15:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:15:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 16:15:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO 
'\''policy_user'\''@'\''%'\'' ;' 16:15:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 16:15:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 16:15:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 16:15:32 mariadb | 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 16:15:32 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 16:15:32 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 16:15:32 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 16:15:32 mariadb | 16:15:32 mariadb | 2024-03-20 16:13:03+00:00 [Note] [Entrypoint]: Stopping temporary server 16:15:32 mariadb | 
2024-03-20 16:13:03 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 16:15:32 mariadb | 2024-03-20 16:13:03 0 [Note] InnoDB: FTS optimize thread exiting. 16:15:32 mariadb | 2024-03-20 16:13:03 0 [Note] InnoDB: Starting shutdown... 16:15:32 mariadb | 2024-03-20 16:13:03 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 16:15:32 mariadb | 2024-03-20 16:13:03 0 [Note] InnoDB: Buffer pool(s) dump completed at 240320 16:13:03 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: Shutdown completed; log sequence number 329603; transaction id 298 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] mariadbd: Shutdown complete 16:15:32 mariadb | 16:15:32 mariadb | 2024-03-20 16:13:04+00:00 [Note] [Entrypoint]: Temporary server stopped 16:15:32 mariadb | 16:15:32 mariadb | 2024-03-20 16:13:04+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 16:15:32 mariadb | 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: Number of transaction pools: 1 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: Completed initialization of buffer pool 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: 128 rollback segments are active. 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: log sequence number 329603; transaction id 299 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] Plugin 'FEEDBACK' is disabled. 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] Server socket created on IP: '0.0.0.0'. 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] Server socket created on IP: '::'. 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] mariadbd: ready for connections. 
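The sequence above is the image's standard two-phase bootstrap: db.sh and the other init scripts run against a throwaway server (note port 0 in the earlier temporary-server banner), which is shut down before mariadbd restarts as PID 1 to serve real traffic on 3306. The unauthenticated 'Aborted connection' warnings that follow are consistent with bare TCP port probes from dependent containers, matching the 'Waiting for mariadb port 3306...' loop visible in the policy-api log further down. A quick manual verification sketch, not executed by this job, reusing the credentials already visible in the db.sh trace (root/secret, policy_user/policy_user) and the compose hostname mariadb:

# Assumes a mysql client on the attached network; credentials come from the trace above.
mysql -h mariadb -upolicy_user -ppolicy_user -e 'SHOW DATABASES;'
mysql -h mariadb -uroot -psecret -e "SHOW GRANTS FOR 'policy_user'@'%';"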
16:15:32 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 16:15:32 mariadb | 2024-03-20 16:13:04 0 [Note] InnoDB: Buffer pool(s) load completed at 240320 16:13:04 16:15:32 mariadb | 2024-03-20 16:13:04 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 16:15:32 mariadb | 2024-03-20 16:13:04 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 16:15:32 mariadb | 2024-03-20 16:13:05 17 [Warning] Aborted connection 17 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 16:15:32 mariadb | 2024-03-20 16:13:06 63 [Warning] Aborted connection 63 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0450-pdpgroup.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0470-pdp.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, 
MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0570-toscadatatype.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName 
VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0630-toscanodetype.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, 
concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0660-toscaparameter.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 kafka | log.local.retention.ms = -2 16:15:32 kafka | log.message.downconversion.enable = true 16:15:32 kafka | log.message.format.version = 3.0-IV1 16:15:32 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 16:15:32 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 16:15:32 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 16:15:32 kafka | log.message.timestamp.type = CreateTime 16:15:32 kafka | log.preallocate = false 16:15:32 kafka | log.retention.bytes = -1 16:15:32 kafka | log.retention.check.interval.ms = 300000 16:15:32 kafka | log.retention.hours = 168 16:15:32 kafka | log.retention.minutes = null 16:15:32 kafka | log.retention.ms = null 16:15:32 kafka | log.roll.hours = 168 16:15:32 kafka | log.roll.jitter.hours = 0 16:15:32 kafka | log.roll.jitter.ms = null 16:15:32 kafka | log.roll.ms = null 16:15:32 kafka | log.segment.bytes = 1073741824 16:15:32 kafka | log.segment.delete.delay.ms = 60000 16:15:32 kafka | max.connection.creation.rate = 2147483647 16:15:32 kafka | max.connections = 2147483647 16:15:32 kafka | max.connections.per.ip = 2147483647 16:15:32 kafka | max.connections.per.ip.overrides = 16:15:32 kafka | max.incremental.fetch.session.cache.slots = 1000 16:15:32 kafka | message.max.bytes = 1048588 16:15:32 kafka | metadata.log.dir = null 16:15:32 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 16:15:32 kafka | metadata.log.max.snapshot.interval.ms = 3600000 16:15:32 kafka | metadata.log.segment.bytes = 1073741824 16:15:32 kafka | metadata.log.segment.min.bytes = 8388608 16:15:32 kafka | metadata.log.segment.ms = 604800000 16:15:32 kafka | metadata.max.idle.interval.ms = 500 16:15:32 kafka | metadata.max.retention.bytes = 104857600 16:15:32 kafka | metadata.max.retention.ms = 604800000 16:15:32 kafka | metric.reporters = [] 16:15:32 kafka | metrics.num.samples = 2 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0670-toscapolicies.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName 
VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0690-toscapolicy.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0730-toscaproperty.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT 
EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0770-toscarequirement.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0780-toscarequirements.sql 16:15:32 kafka | metrics.recording.level = INFO 16:15:32 kafka | metrics.sample.window.ms = 30000 16:15:32 kafka | min.insync.replicas = 1 16:15:32 kafka | node.id = 1 16:15:32 kafka | num.io.threads = 8 16:15:32 kafka | num.network.threads = 3 16:15:32 kafka | num.partitions = 1 16:15:32 kafka | num.recovery.threads.per.data.dir = 1 16:15:32 kafka | num.replica.alter.log.dirs.threads = null 16:15:32 kafka | num.replica.fetchers = 1 16:15:32 kafka | offset.metadata.max.bytes = 4096 16:15:32 kafka | offsets.commit.required.acks = -1 16:15:32 kafka | offsets.commit.timeout.ms = 5000 16:15:32 kafka | offsets.load.buffer.size = 5242880 16:15:32 kafka | offsets.retention.check.interval.ms = 600000 16:15:32 kafka | offsets.retention.minutes = 10080 16:15:32 kafka | offsets.topic.compression.codec = 0 16:15:32 kafka | offsets.topic.num.partitions = 50 16:15:32 kafka | offsets.topic.replication.factor = 1 16:15:32 kafka | offsets.topic.segment.bytes = 104857600 16:15:32 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 16:15:32 kafka | password.encoder.iterations = 4096 16:15:32 kafka | password.encoder.key.length = 128 16:15:32 kafka | password.encoder.keyfactory.algorithm = null 16:15:32 kafka | password.encoder.old.secret = null 16:15:32 kafka | password.encoder.secret = null 
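Among the broker values above, offsets.retention.minutes = 10080 means committed consumer offsets are discarded roughly seven days after a group goes empty, and offsets.topic.replication.factor = 1 is only safe because this is a single-broker test cluster. A sketch for inspecting what a group has committed, again assuming the standard Kafka CLI in the broker container (kafka-consumer-groups.sh in the Apache distribution); the group name is a placeholder:

# Hypothetical inspection of committed offsets; not part of this CI run.
kafka-consumer-groups --bootstrap-server kafka:9092 --list
kafka-consumer-groups --bootstrap-server kafka:9092 --describe --group example-group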
16:15:32 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 16:15:32 kafka | process.roles = [] 16:15:32 kafka | producer.id.expiration.check.interval.ms = 600000 16:15:32 kafka | producer.id.expiration.ms = 86400000 16:15:32 kafka | producer.purgatory.purge.interval.requests = 1000 16:15:32 kafka | queued.max.request.bytes = -1 16:15:32 kafka | queued.max.requests = 500 16:15:32 kafka | quota.window.num = 11 16:15:32 kafka | quota.window.size.seconds = 1 16:15:32 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 16:15:32 kafka | remote.log.manager.task.interval.ms = 30000 16:15:32 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 16:15:32 kafka | remote.log.manager.task.retry.backoff.ms = 500 16:15:32 kafka | remote.log.manager.task.retry.jitter = 0.2 16:15:32 kafka | remote.log.manager.thread.pool.size = 10 16:15:32 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 16:15:32 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 16:15:32 kafka | remote.log.metadata.manager.class.path = null 16:15:32 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 16:15:32 kafka | remote.log.metadata.manager.listener.name = null 16:15:32 kafka | remote.log.reader.max.pending.tasks = 100 16:15:32 kafka | remote.log.reader.threads = 10 16:15:32 kafka | remote.log.storage.manager.class.name = null 16:15:32 kafka | remote.log.storage.manager.class.path = null 16:15:32 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 16:15:32 kafka | remote.log.storage.system.enable = false 16:15:32 kafka | replica.fetch.backoff.ms = 1000 16:15:32 kafka | replica.fetch.max.bytes = 1048576 16:15:32 kafka | replica.fetch.min.bytes = 1 16:15:32 kafka | replica.fetch.response.max.bytes = 10485760 16:15:32 kafka | replica.fetch.wait.max.ms = 500 16:15:32 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 16:15:32 kafka | replica.lag.time.max.ms = 30000 16:15:32 kafka | replica.selector.class = null 16:15:32 kafka | replica.socket.receive.buffer.bytes = 65536 16:15:32 kafka | replica.socket.timeout.ms = 30000 16:15:32 kafka | replication.quota.window.num = 11 16:15:32 kafka | replication.quota.window.size.seconds = 1 16:15:32 kafka | request.timeout.ms = 30000 16:15:32 kafka | reserved.broker.max.id = 1000 16:15:32 kafka | sasl.client.callback.handler.class = null 16:15:32 kafka | sasl.enabled.mechanisms = [GSSAPI] 16:15:32 kafka | sasl.jaas.config = null 16:15:32 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:15:32 kafka | sasl.kerberos.min.time.before.relogin = 60000 16:15:32 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 16:15:32 kafka | sasl.kerberos.service.name = null 16:15:32 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 16:15:32 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, 
concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 kafka | sasl.login.callback.handler.class = null 16:15:32 kafka | sasl.login.class = null 16:15:32 kafka | sasl.login.connect.timeout.ms = null 16:15:32 kafka | sasl.login.read.timeout.ms = null 16:15:32 kafka | sasl.login.refresh.buffer.seconds = 300 16:15:32 kafka | sasl.login.refresh.min.period.seconds = 60 16:15:32 kafka | sasl.login.refresh.window.factor = 0.8 16:15:32 kafka | sasl.login.refresh.window.jitter = 0.05 16:15:32 kafka | sasl.login.retry.backoff.max.ms = 10000 16:15:32 kafka | sasl.login.retry.backoff.ms = 100 16:15:32 kafka | sasl.mechanism.controller.protocol = GSSAPI 16:15:32 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 16:15:32 kafka | sasl.oauthbearer.clock.skew.seconds = 30 16:15:32 kafka | sasl.oauthbearer.expected.audience = null 16:15:32 kafka | sasl.oauthbearer.expected.issuer = null 16:15:32 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:15:32 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:15:32 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:15:32 kafka | sasl.oauthbearer.jwks.endpoint.url = null 16:15:32 kafka | sasl.oauthbearer.scope.claim.name = scope 16:15:32 kafka | sasl.oauthbearer.sub.claim.name = sub 16:15:32 kafka | 
sasl.oauthbearer.token.endpoint.url = null 16:15:32 kafka | sasl.server.callback.handler.class = null 16:15:32 kafka | sasl.server.max.receive.size = 524288 16:15:32 kafka | security.inter.broker.protocol = PLAINTEXT 16:15:32 kafka | security.providers = null 16:15:32 kafka | server.max.startup.time.ms = 9223372036854775807 16:15:32 kafka | socket.connection.setup.timeout.max.ms = 30000 16:15:32 kafka | socket.connection.setup.timeout.ms = 10000 16:15:32 kafka | socket.listen.backlog.size = 50 16:15:32 kafka | socket.receive.buffer.bytes = 102400 16:15:32 kafka | socket.request.max.bytes = 104857600 16:15:32 kafka | socket.send.buffer.bytes = 102400 16:15:32 kafka | ssl.cipher.suites = [] 16:15:32 kafka | ssl.client.auth = none 16:15:32 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:15:32 kafka | ssl.endpoint.identification.algorithm = https 16:15:32 kafka | ssl.engine.factory.class = null 16:15:32 kafka | ssl.key.password = null 16:15:32 kafka | ssl.keymanager.algorithm = SunX509 16:15:32 kafka | ssl.keystore.certificate.chain = null 16:15:32 kafka | ssl.keystore.key = null 16:15:32 kafka | ssl.keystore.location = null 16:15:32 kafka | ssl.keystore.password = null 16:15:32 kafka | ssl.keystore.type = JKS 16:15:32 kafka | ssl.principal.mapping.rules = DEFAULT 16:15:32 kafka | ssl.protocol = TLSv1.3 16:15:32 kafka | ssl.provider = null 16:15:32 kafka | ssl.secure.random.implementation = null 16:15:32 kafka | ssl.trustmanager.algorithm = PKIX 16:15:32 kafka | ssl.truststore.certificates = null 16:15:32 kafka | ssl.truststore.location = null 16:15:32 kafka | ssl.truststore.password = null 16:15:32 kafka | ssl.truststore.type = JKS 16:15:32 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 16:15:32 kafka | transaction.max.timeout.ms = 900000 16:15:32 kafka | transaction.partition.verification.enable = true 16:15:32 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 16:15:32 kafka | transaction.state.log.load.buffer.size = 5242880 16:15:32 kafka | transaction.state.log.min.isr = 2 16:15:32 kafka | transaction.state.log.num.partitions = 50 16:15:32 kafka | transaction.state.log.replication.factor = 3 16:15:32 kafka | transaction.state.log.segment.bytes = 104857600 16:15:32 kafka | transactional.id.expiration.ms = 604800000 16:15:32 kafka | unclean.leader.election.enable = false 16:15:32 kafka | unstable.api.versions.enable = false 16:15:32 kafka | zookeeper.clientCnxnSocket = null 16:15:32 kafka | zookeeper.connect = zookeeper:2181 16:15:32 kafka | zookeeper.connection.timeout.ms = null 16:15:32 kafka | zookeeper.max.in.flight.requests = 10 16:15:32 kafka | zookeeper.metadata.migration.enable = false 16:15:32 kafka | zookeeper.session.timeout.ms = 18000 16:15:32 kafka | zookeeper.set.acl = false 16:15:32 kafka | zookeeper.ssl.cipher.suites = null 16:15:32 kafka | zookeeper.ssl.client.enable = false 16:15:32 kafka | zookeeper.ssl.crl.enable = false 16:15:32 kafka | zookeeper.ssl.enabled.protocols = null 16:15:32 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 16:15:32 kafka | zookeeper.ssl.keystore.location = null 16:15:32 kafka | zookeeper.ssl.keystore.password = null 16:15:32 kafka | zookeeper.ssl.keystore.type = null 16:15:32 policy-db-migrator | > upgrade 0820-toscatrigger.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) 
NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 
policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator 
| 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 kafka | zookeeper.ssl.ocsp.enable = false 16:15:32 kafka | zookeeper.ssl.protocol = TLSv1.2 16:15:32 kafka | zookeeper.ssl.truststore.location = null 16:15:32 kafka | zookeeper.ssl.truststore.password = null 16:15:32 kafka | zookeeper.ssl.truststore.type = null 16:15:32 kafka | (kafka.server.KafkaConfig) 16:15:32 kafka | [2024-03-20 16:13:10,122] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 16:15:32 kafka | [2024-03-20 16:13:10,123] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 16:15:32 kafka | [2024-03-20 16:13:10,124] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 16:15:32 kafka | [2024-03-20 16:13:10,128] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 16:15:32 kafka | [2024-03-20 16:13:10,158] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 16:15:32 kafka | [2024-03-20 16:13:10,162] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 16:15:32 kafka | [2024-03-20 16:13:10,172] INFO Loaded 0 logs in 13ms (kafka.log.LogManager) 16:15:32 kafka | [2024-03-20 16:13:10,173] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 16:15:32 kafka | [2024-03-20 16:13:10,174] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 16:15:32 kafka | [2024-03-20 16:13:10,185] INFO Starting the log cleaner (kafka.log.LogCleaner) 16:15:32 kafka | [2024-03-20 16:13:10,228] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 16:15:32 kafka | [2024-03-20 16:13:10,264] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 16:15:32 kafka | [2024-03-20 16:13:10,276] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 16:15:32 kafka | [2024-03-20 16:13:10,301] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 16:15:32 kafka | [2024-03-20 16:13:10,600] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 16:15:32 kafka | [2024-03-20 16:13:10,618] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 16:15:32 kafka | [2024-03-20 16:13:10,618] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 16:15:32 kafka | [2024-03-20 16:13:10,624] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 16:15:32 kafka | [2024-03-20 16:13:10,628] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 16:15:32 kafka | [2024-03-20 16:13:10,649] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 16:15:32 kafka | [2024-03-20 16:13:10,651] INFO [ExpirationReaper-1-Fetch]: Starting 
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 16:15:32 kafka | [2024-03-20 16:13:10,656] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 16:15:32 kafka | [2024-03-20 16:13:10,656] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 16:15:32 kafka | [2024-03-20 16:13:10,658] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 16:15:32 kafka | [2024-03-20 16:13:10,670] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 16:15:32 kafka | [2024-03-20 16:13:10,673] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 16:15:32 kafka | [2024-03-20 16:13:10,694] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 16:15:32 kafka | [2024-03-20 16:13:10,725] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1710951190715,1710951190715,1,0,0,72057611256070145,258,0,27 16:15:32 kafka | (kafka.zk.KafkaZkClient) 16:15:32 kafka | [2024-03-20 16:13:10,726] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 16:15:32 kafka | [2024-03-20 16:13:10,814] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 16:15:32 kafka | [2024-03-20 16:13:10,821] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 16:15:32 kafka | [2024-03-20 16:13:10,827] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 16:15:32 kafka | [2024-03-20 16:13:10,828] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 16:15:32 kafka | [2024-03-20 16:13:10,839] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 16:15:32 kafka | [2024-03-20 16:13:10,842] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:10,849] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,852] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:10,857] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,861] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 16:15:32 kafka | [2024-03-20 16:13:10,874] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 16:15:32 kafka | [2024-03-20 16:13:10,877] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 16:15:32 kafka | [2024-03-20 16:13:10,879] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 16:15:32 kafka | [2024-03-20 16:13:10,891] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) 16:15:32 kafka | [2024-03-20 16:13:10,891] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,898] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,904] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,909] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,912] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 16:15:32 kafka | [2024-03-20 16:13:10,923] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,929] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,934] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 16:15:32 kafka | [2024-03-20 16:13:10,938] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 16:15:32 kafka | [2024-03-20 16:13:10,943] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 16:15:32 policy-api | Waiting for mariadb port 3306... 16:15:32 policy-api | mariadb (172.17.0.3:3306) open 16:15:32 policy-api | Waiting for policy-db-migrator port 6824... 16:15:32 policy-api | policy-db-migrator (172.17.0.7:6824) open 16:15:32 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 16:15:32 policy-api | 16:15:32 policy-api | . ____ _ __ _ _ 16:15:32 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 16:15:32 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 16:15:32 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 16:15:32 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 16:15:32 policy-api | =========|_|==============|___/=/_/_/_/ 16:15:32 policy-api | :: Spring Boot :: (v3.1.8) 16:15:32 policy-api | 16:15:32 policy-api | [2024-03-20T16:13:13.234+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) 16:15:32 policy-api | [2024-03-20T16:13:13.236+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 16:15:32 policy-api | [2024-03-20T16:13:14.960+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 16:15:32 policy-api | [2024-03-20T16:13:15.051+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 82 ms. Found 6 JPA repository interfaces. 16:15:32 policy-api | [2024-03-20T16:13:15.470+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 16:15:32 policy-api | [2024-03-20T16:13:15.471+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 16:15:32 policy-api | [2024-03-20T16:13:16.121+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 16:15:32 policy-api | [2024-03-20T16:13:16.131+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 16:15:32 policy-api | [2024-03-20T16:13:16.133+00:00|INFO|StandardService|main] Starting service [Tomcat] 16:15:32 policy-api | [2024-03-20T16:13:16.133+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 16:15:32 policy-api | [2024-03-20T16:13:16.224+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 16:15:32 policy-api | [2024-03-20T16:13:16.224+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2909 ms 16:15:32 policy-api | [2024-03-20T16:13:16.711+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 16:15:32 policy-api | [2024-03-20T16:13:16.794+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.224860454Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=66.201µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.230496957Z level=info msg="Executing migration" id="Add preferences index org_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.231474922Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=974.665µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.236156161Z level=info msg="Executing migration" id="Add preferences index user_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.236985753Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=829.232µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.239993468Z level=info msg="Executing migration" id="create alert table v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.241035863Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.043955ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.244274701Z level=info msg="Executing migration" id="add index alert org_id & id " 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.245120653Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=845.562µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.249719501Z level=info msg="Executing migration" id="add index alert state" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.250507833Z level=info msg="Migration successfully executed" id="add index alert state" duration=788.052µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.25367542Z level=info msg="Executing migration" id="add index alert dashboard_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.254459441Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=783.751µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.274752551Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.275959539Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.207168ms 16:15:32 grafana | logger=migrator 
t=2024-03-20T16:13:02.280462736Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.281839776Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.3769ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.285547171Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.286089269Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=542.458µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.289495259Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.297872763Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=8.374744ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.302937598Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.303435215Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=495.747µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.306375059Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.306951927Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=576.408µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.310930176Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.311432484Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=501.918µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.355628957Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.357119829Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.491332ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.36936161Z level=info msg="Executing migration" id="create alert_notification table v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.370614678Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.250518ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.375817995Z level=info msg="Executing migration" id="Add column is_default" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.379291567Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.473312ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.478964186Z level=info msg="Executing migration" id="Add column frequency" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.484303585Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.337829ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.489569633Z level=info msg="Executing migration" id="Add column send_reminder" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.493216388Z level=info msg="Migration 
successfully executed" id="Add column send_reminder" duration=3.647115ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.4973987Z level=info msg="Executing migration" id="Add column disable_resolve_message" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.500981983Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.582963ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.505065563Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.505901936Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=832.693µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.511297256Z level=info msg="Executing migration" id="Update alert table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.511321777Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=25.681µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.514533174Z level=info msg="Executing migration" id="Update alert_notification table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.514559555Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=27.701µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.517884734Z level=info msg="Executing migration" id="create notification_journal table v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.519011431Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.126017ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.523431216Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.524860687Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.428421ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.558868133Z level=info msg="Executing migration" id="drop alert_notification_journal" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.559975059Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.106696ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.564125321Z level=info msg="Executing migration" id="create alert_notification_state table v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.565354349Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.228318ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.581662731Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.583144694Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.484772ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.589460157Z level=info msg="Executing migration" id="Add for to alert table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.593897123Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.435916ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.599971323Z level=info msg="Executing migration" id="Add column uid in alert_notification" 
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.604051584Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.082001ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.608194505Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.60848604Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=299.025µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.611786579Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.613145889Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.35915ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.619618135Z level=info msg="Executing migration" id="Remove unique index org_id_name" 16:15:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD 
CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0100-pdp.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 16:15:32 policy-db-migrator | JOIN pdpstatistics b 16:15:32 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 16:15:32 policy-db-migrator | SET a.id = b.id 16:15:32 prometheus | ts=2024-03-20T16:12:57.051Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d 16:15:32 prometheus | ts=2024-03-20T16:12:57.051Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 16:15:32 prometheus | ts=2024-03-20T16:12:57.051Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 16:15:32 prometheus | ts=2024-03-20T16:12:57.051Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 16:15:32 prometheus | ts=2024-03-20T16:12:57.051Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 16:15:32 prometheus | ts=2024-03-20T16:12:57.051Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 16:15:32 prometheus | ts=2024-03-20T16:12:57.058Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 16:15:32 prometheus | ts=2024-03-20T16:12:57.058Z caller=main.go:1129 level=info msg="Starting TSDB ..." 16:15:32 prometheus | ts=2024-03-20T16:12:57.061Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 16:15:32 prometheus | ts=2024-03-20T16:12:57.061Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 16:15:32 prometheus | ts=2024-03-20T16:12:57.062Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 16:15:32 prometheus | ts=2024-03-20T16:12:57.062Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.46µs 16:15:32 prometheus | ts=2024-03-20T16:12:57.062Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 16:15:32 prometheus | ts=2024-03-20T16:12:57.064Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 16:15:32 prometheus | ts=2024-03-20T16:12:57.064Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=30.96µs wal_replay_duration=1.168477ms wbl_replay_duration=180ns total_replay_duration=1.225307ms 16:15:32 prometheus | ts=2024-03-20T16:12:57.067Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 16:15:32 prometheus | ts=2024-03-20T16:12:57.067Z caller=main.go:1153 level=info msg="TSDB started" 16:15:32 prometheus | ts=2024-03-20T16:12:57.067Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 16:15:32 prometheus | ts=2024-03-20T16:12:57.068Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=872.852µs db_storage=1.38µs remote_storage=1.84µs web_handler=290ns query_engine=1.46µs scrape=260.854µs scrape_sd=105.492µs notify=29.19µs notify_sd=9.79µs rules=1.7µs tracing=7.13µs 16:15:32 prometheus | ts=2024-03-20T16:12:57.068Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 16:15:32 prometheus | ts=2024-03-20T16:12:57.068Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 16:15:32 kafka | [2024-03-20 16:13:10,944] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,944] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,945] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,945] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,954] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 16:15:32 kafka | [2024-03-20 16:13:10,954] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,955] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,956] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,957] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 16:15:32 kafka | [2024-03-20 16:13:10,957] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) 16:15:32 kafka | [2024-03-20 16:13:10,958] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:10,958] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 16:15:32 kafka | [2024-03-20 16:13:10,961] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:10,971] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 16:15:32 kafka | [2024-03-20 16:13:10,971] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 16:15:32 kafka | [2024-03-20 16:13:10,971] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 16:15:32 kafka | [2024-03-20 16:13:10,971] INFO Kafka startTimeMs: 1710951190965 (org.apache.kafka.common.utils.AppInfoParser) 16:15:32 kafka | [2024-03-20 16:13:10,972] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 16:15:32 kafka | [2024-03-20 16:13:10,975] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 16:15:32 kafka | [2024-03-20 16:13:10,977] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 16:15:32 kafka | [2024-03-20 16:13:10,984] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 16:15:32 kafka | [2024-03-20 16:13:10,987] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 16:15:32 kafka | [2024-03-20 16:13:10,989] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 16:15:32 kafka | [2024-03-20 16:13:10,990] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 16:15:32 kafka | [2024-03-20 16:13:11,000] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 16:15:32 kafka | [2024-03-20 16:13:11,001] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:11,006] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:11,006] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:11,007] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:11,007] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:11,008] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:11,040] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:11,072] TRACE [Controller id=1 epoch=1] 
Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:11,126] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 16:15:32 kafka | [2024-03-20 16:13:11,133] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 16:15:32 kafka | [2024-03-20 16:13:16,042] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:16,042] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:36,787] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 16:15:32 kafka | [2024-03-20 16:13:36,789] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 16:15:32 kafka | [2024-03-20 16:13:36,811] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:36,820] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:36,868] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(NJ0Ig4IURHCsgbfyjcgUCg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(QvJGL4ltS_qYQrjK6IZK9A),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:36,870] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 16:15:32 kafka | [2024-03-20 16:13:36,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,876] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,877] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 policy-pap | Waiting for mariadb port 3306... 16:15:32 policy-pap | mariadb (172.17.0.3:3306) open 16:15:32 policy-pap | Waiting for kafka port 9092... 16:15:32 policy-pap | kafka (172.17.0.9:9092) open 16:15:32 policy-pap | Waiting for api port 6969... 16:15:32 policy-pap | api (172.17.0.8:6969) open 16:15:32 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 16:15:32 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 16:15:32 policy-pap | 16:15:32 policy-pap | . ____ _ __ _ _ 16:15:32 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 16:15:32 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 16:15:32 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 16:15:32 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 16:15:32 policy-pap | =========|_|==============|___/=/_/_/_/ 16:15:32 policy-pap | :: Spring Boot :: (v3.1.8) 16:15:32 policy-pap | 16:15:32 policy-pap | [2024-03-20T16:13:26.594+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 31 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 16:15:32 policy-pap | [2024-03-20T16:13:26.596+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 16:15:32 policy-pap | [2024-03-20T16:13:28.424+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 16:15:32 policy-pap | [2024-03-20T16:13:28.551+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 117 ms. Found 7 JPA repository interfaces. 16:15:32 policy-pap | [2024-03-20T16:13:28.973+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 16:15:32 policy-pap | [2024-03-20T16:13:28.973+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 16:15:32 policy-pap | [2024-03-20T16:13:29.675+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 16:15:32 policy-pap | [2024-03-20T16:13:29.685+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 16:15:32 policy-pap | [2024-03-20T16:13:29.687+00:00|INFO|StandardService|main] Starting service [Tomcat] 16:15:32 policy-pap | [2024-03-20T16:13:29.687+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 16:15:32 policy-pap | [2024-03-20T16:13:29.800+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 16:15:32 policy-pap | [2024-03-20T16:13:29.800+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3123 ms 16:15:32 policy-pap | [2024-03-20T16:13:30.234+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 16:15:32 policy-pap | [2024-03-20T16:13:30.317+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 16:15:32 policy-pap | [2024-03-20T16:13:30.320+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 16:15:32 policy-pap | [2024-03-20T16:13:30.369+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 16:15:32 policy-pap | [2024-03-20T16:13:30.737+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 16:15:32 policy-pap | [2024-03-20T16:13:30.757+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 16:15:32 policy-apex-pdp | Waiting for mariadb port 3306... 16:15:32 policy-apex-pdp | mariadb (172.17.0.3:3306) open 16:15:32 policy-apex-pdp | Waiting for kafka port 9092... 16:15:32 policy-apex-pdp | kafka (172.17.0.9:9092) open 16:15:32 policy-apex-pdp | Waiting for pap port 6969... 
16:15:32 policy-apex-pdp | pap (172.17.0.10:6969) open 16:15:32 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 16:15:32 policy-apex-pdp | [2024-03-20T16:13:36.990+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.201+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 16:15:32 policy-apex-pdp | allow.auto.create.topics = true 16:15:32 policy-apex-pdp | auto.commit.interval.ms = 5000 16:15:32 policy-apex-pdp | auto.include.jmx.reporter = true 16:15:32 policy-apex-pdp | auto.offset.reset = latest 16:15:32 policy-apex-pdp | bootstrap.servers = [kafka:9092] 16:15:32 policy-apex-pdp | check.crcs = true 16:15:32 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 16:15:32 policy-apex-pdp | client.id = consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-1 16:15:32 policy-apex-pdp | client.rack = 16:15:32 policy-apex-pdp | connections.max.idle.ms = 540000 16:15:32 policy-apex-pdp | default.api.timeout.ms = 60000 16:15:32 policy-apex-pdp | enable.auto.commit = true 16:15:32 policy-apex-pdp | exclude.internal.topics = true 16:15:32 policy-apex-pdp | fetch.max.bytes = 52428800 16:15:32 policy-apex-pdp | fetch.max.wait.ms = 500 16:15:32 policy-apex-pdp | fetch.min.bytes = 1 16:15:32 policy-apex-pdp | group.id = 2f5e0a58-910b-431c-bb29-e00354420c7f 16:15:32 policy-apex-pdp | group.instance.id = null 16:15:32 policy-apex-pdp | heartbeat.interval.ms = 3000 16:15:32 policy-apex-pdp | interceptor.classes = [] 16:15:32 policy-apex-pdp | internal.leave.group.on.close = true 16:15:32 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 16:15:32 policy-apex-pdp | isolation.level = read_uncommitted 16:15:32 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:15:32 policy-apex-pdp | max.partition.fetch.bytes = 1048576 16:15:32 policy-apex-pdp | max.poll.interval.ms = 300000 16:15:32 policy-api | [2024-03-20T16:13:16.798+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 16:15:32 policy-api | [2024-03-20T16:13:16.855+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 16:15:32 policy-api | [2024-03-20T16:13:17.277+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 16:15:32 policy-api | [2024-03-20T16:13:17.315+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 16:15:32 policy-api | [2024-03-20T16:13:17.436+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@63b3ee82 16:15:32 policy-api | [2024-03-20T16:13:17.438+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
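A sketch, outside the captured log: the "Waiting for mariadb port 3306..." probes and the HikariPool-1 startup above amount to a TCP reachability check followed by a JDBC connection through the MariaDB driver the log names (org.mariadb.jdbc). Host and port come from the log; the database name and credentials below are placeholders, and the harness itself is hypothetical, not ONAP code.

    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class WaitForMariadb {
        public static void main(String[] args) throws Exception {
            // Port probe, analogous to the compose scripts' "Waiting for ... port" loop.
            while (true) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress("mariadb", 3306), 1000);
                    break; // port open
                } catch (Exception retry) {
                    Thread.sleep(1000); // not reachable yet, retry
                }
            }
            // JDBC connect via the MariaDB driver seen in the HikariPool log lines.
            String url = "jdbc:mariadb://mariadb:3306/policyadmin"; // db name and credentials are placeholders
            try (Connection c = DriverManager.getConnection(url, "policy_user", "policy_pass")) {
                System.out.println("mariadb (" + c.getMetaData().getURL() + ") open");
            }
        }
    }

Only once both checks pass do the components proceed, which is why every service log above opens with the same "Waiting for ... open" preamble before the Spring Boot banner.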
16:15:32 policy-api | [2024-03-20T16:13:19.293+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 16:15:32 policy-api | [2024-03-20T16:13:19.296+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 16:15:32 policy-api | [2024-03-20T16:13:20.345+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 16:15:32 policy-api | [2024-03-20T16:13:21.231+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 16:15:32 policy-api | [2024-03-20T16:13:22.338+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 16:15:32 policy-api | [2024-03-20T16:13:22.527+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@58a01e47, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6149184e, org.springframework.security.web.context.SecurityContextHolderFilter@234a08ea, org.springframework.security.web.header.HeaderWriterFilter@2e26841f, org.springframework.security.web.authentication.logout.LogoutFilter@c7a7d3, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3413effc, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@56d3e4a9, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2542d320, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6f3a8d5e, org.springframework.security.web.access.ExceptionTranslationFilter@19bd1f98, org.springframework.security.web.access.intercept.AuthorizationFilter@729f8c5d] 16:15:32 policy-api | [2024-03-20T16:13:23.406+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 16:15:32 policy-api | [2024-03-20T16:13:23.512+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 16:15:32 policy-api | [2024-03-20T16:13:23.544+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 16:15:32 policy-api | [2024-03-20T16:13:23.567+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.122 seconds (process running for 11.707) 16:15:32 policy-api | [2024-03-20T16:13:39.730+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 16:15:32 policy-api | [2024-03-20T16:13:39.731+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 16:15:32 policy-api | [2024-03-20T16:13:39.732+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 16:15:32 policy-api | [2024-03-20T16:13:39.998+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 16:15:32 policy-api | [] 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 16:15:32 policy-db-migrator | 
-------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0210-sequence.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0220-sequence.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0120-toscatrigger.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0140-toscaparameter.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 
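A sketch, outside the captured log: each of the "> upgrade NNNN-*.sql" steps above is plain SQL, and can be replayed through JDBC, which is essentially what the db-migrator does per script. The two SQL strings below are copied verbatim from the 0210/0220 "sequence" steps in the log; the connection URL and credentials are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SequenceUpgrade {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:mariadb://mariadb:3306/policyadmin"; // placeholder URL and credentials
            try (Connection c = DriverManager.getConnection(url, "user", "pass");
                 Statement s = c.createStatement()) {
                // 0210-sequence.sql, copied from the log
                s.execute("CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, "
                        + "SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))");
                // 0220-sequence.sql, copied from the log
                s.execute("INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) "
                        + "VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))");
            }
        }
    }

Note the seed value: SEQ_GEN starts from IFNULL(max(id),0) over pdpstatistics, so generated ids continue from the highest id assigned by the earlier 0140 row-numbering migration.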
16:15:32 policy-db-migrator | > upgrade 0150-toscaproperty.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 16:15:32 policy-pap | [2024-03-20T16:13:30.867+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2def7a7a 16:15:32 policy-pap | [2024-03-20T16:13:30.869+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 16:15:32 policy-pap | [2024-03-20T16:13:32.936+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 16:15:32 policy-pap | [2024-03-20T16:13:32.940+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 16:15:32 policy-pap | [2024-03-20T16:13:33.488+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 16:15:32 policy-pap | [2024-03-20T16:13:33.884+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 16:15:32 policy-pap | [2024-03-20T16:13:33.981+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 16:15:32 policy-pap | [2024-03-20T16:13:34.296+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 16:15:32 policy-pap | allow.auto.create.topics = true 16:15:32 policy-pap | auto.commit.interval.ms = 5000 16:15:32 policy-pap | auto.include.jmx.reporter = true 16:15:32 policy-pap | auto.offset.reset = latest 16:15:32 policy-pap | bootstrap.servers = [kafka:9092] 16:15:32 policy-pap | check.crcs = true 16:15:32 policy-pap | client.dns.lookup = use_all_dns_ips 16:15:32 policy-pap | client.id = consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-1 16:15:32 policy-pap | client.rack = 16:15:32 policy-pap | connections.max.idle.ms = 540000 16:15:32 policy-pap | default.api.timeout.ms = 60000 16:15:32 policy-pap | enable.auto.commit = true 16:15:32 policy-pap | exclude.internal.topics = true 16:15:32 policy-pap | fetch.max.bytes = 52428800 16:15:32 policy-pap | fetch.max.wait.ms = 500 16:15:32 policy-pap | fetch.min.bytes = 1 16:15:32 policy-pap | group.id = cd3571d2-bf35-4e38-b6c0-741ea8425298 16:15:32 policy-pap | group.instance.id = null 16:15:32 policy-pap | heartbeat.interval.ms = 3000 16:15:32 policy-pap | interceptor.classes = [] 16:15:32 policy-pap | internal.leave.group.on.close = true 16:15:32 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 16:15:32 policy-pap | isolation.level = read_uncommitted 16:15:32 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:15:32 policy-pap | max.partition.fetch.bytes = 1048576 16:15:32 policy-pap | max.poll.interval.ms = 300000 16:15:32 policy-pap | max.poll.records = 500 16:15:32 policy-pap | metadata.max.age.ms = 300000 16:15:32 policy-pap | metric.reporters = [] 16:15:32 policy-pap | metrics.num.samples = 2 16:15:32 policy-pap | metrics.recording.level = INFO 16:15:32 policy-apex-pdp | max.poll.records = 500 16:15:32 policy-apex-pdp | metadata.max.age.ms = 300000 16:15:32 policy-apex-pdp | metric.reporters = [] 16:15:32 policy-apex-pdp | metrics.num.samples = 2 16:15:32 policy-apex-pdp | metrics.recording.level = INFO 16:15:32 policy-apex-pdp | metrics.sample.window.ms = 30000 16:15:32 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 16:15:32 policy-apex-pdp | receive.buffer.bytes = 65536 16:15:32 policy-apex-pdp | reconnect.backoff.max.ms = 1000 16:15:32 policy-apex-pdp | reconnect.backoff.ms = 50 16:15:32 policy-apex-pdp | request.timeout.ms = 30000 16:15:32 policy-apex-pdp | retry.backoff.ms = 100 16:15:32 policy-apex-pdp | sasl.client.callback.handler.class = null 16:15:32 policy-apex-pdp | sasl.jaas.config = null 16:15:32 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:15:32 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 16:15:32 policy-apex-pdp | sasl.kerberos.service.name = null 16:15:32 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 16:15:32 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 16:15:32 policy-apex-pdp | sasl.login.callback.handler.class = null 16:15:32 policy-apex-pdp | sasl.login.class = null 16:15:32 policy-apex-pdp | sasl.login.connect.timeout.ms = null 16:15:32 policy-apex-pdp | sasl.login.read.timeout.ms = null 16:15:32 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 16:15:32 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 
60 16:15:32 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 16:15:32 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 16:15:32 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 16:15:32 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 16:15:32 policy-apex-pdp | sasl.mechanism = GSSAPI 16:15:32 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 16:15:32 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 16:15:32 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 16:15:32 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 16:15:32 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 16:15:32 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 16:15:32 policy-pap | metrics.sample.window.ms = 30000 16:15:32 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 16:15:32 policy-pap | receive.buffer.bytes = 65536 16:15:32 policy-pap | reconnect.backoff.max.ms = 1000 16:15:32 policy-pap | reconnect.backoff.ms = 50 16:15:32 policy-pap | request.timeout.ms = 30000 16:15:32 policy-pap | retry.backoff.ms = 100 16:15:32 policy-pap | sasl.client.callback.handler.class = null 16:15:32 policy-pap | sasl.jaas.config = null 16:15:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:15:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 16:15:32 policy-pap | sasl.kerberos.service.name = null 16:15:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 16:15:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 16:15:32 policy-pap | sasl.login.callback.handler.class = null 16:15:32 policy-pap | sasl.login.class = null 16:15:32 policy-pap | sasl.login.connect.timeout.ms = null 16:15:32 policy-pap | sasl.login.read.timeout.ms = null 16:15:32 policy-pap | sasl.login.refresh.buffer.seconds = 300 16:15:32 policy-pap | sasl.login.refresh.min.period.seconds = 60 16:15:32 policy-pap | sasl.login.refresh.window.factor = 0.8 16:15:32 policy-pap | sasl.login.refresh.window.jitter = 0.05 16:15:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000 16:15:32 policy-pap | sasl.login.retry.backoff.ms = 100 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 16:15:32 
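
The ConsumerConfig dumps in this stretch show how policy-pap and policy-apex-pdp subscribe to the policy-pdp-pap topic on kafka:9092 with String deserializers and auto.offset.reset = latest. The following is a minimal sketch of a consumer built from those same logged values using the standard Apache Kafka Java client; the class name is hypothetical, every property value below appears in the dumps, and everything not set explicitly falls back to the client defaults the log prints.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Minimal sketch mirroring the ConsumerConfig values dumped above; this is
// an illustration, not policy-pap's actual consumer wiring.
public class PdpPapConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");            // from the log
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");       // from the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap")); // topic from the log
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
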
policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0100-upgrade.sql 16:15:32 policy-pap | sasl.mechanism = GSSAPI 16:15:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 16:15:32 policy-pap | sasl.oauthbearer.expected.audience = null 16:15:32 policy-pap | sasl.oauthbearer.expected.issuer = null 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 16:15:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope 16:15:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub 16:15:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null 16:15:32 policy-pap | security.protocol = PLAINTEXT 16:15:32 policy-pap | security.providers = null 16:15:32 policy-pap | send.buffer.bytes = 131072 16:15:32 policy-pap | session.timeout.ms = 45000 16:15:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000 16:15:32 policy-pap | socket.connection.setup.timeout.ms = 10000 16:15:32 policy-pap | ssl.cipher.suites = null 16:15:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:15:32 policy-pap | ssl.endpoint.identification.algorithm = https 16:15:32 policy-pap | ssl.engine.factory.class = null 16:15:32 policy-pap | ssl.key.password = null 16:15:32 policy-pap | ssl.keymanager.algorithm = SunX509 16:15:32 policy-pap | ssl.keystore.certificate.chain = null 16:15:32 policy-pap | ssl.keystore.key = null 16:15:32 policy-pap | ssl.keystore.location = null 16:15:32 policy-pap | ssl.keystore.password = null 16:15:32 policy-pap | ssl.keystore.type = JKS 16:15:32 policy-pap | ssl.protocol = TLSv1.3 16:15:32 policy-pap | ssl.provider = null 16:15:32 policy-pap | ssl.secure.random.implementation = null 16:15:32 policy-pap | ssl.trustmanager.algorithm = PKIX 16:15:32 policy-pap | ssl.truststore.certificates = null 16:15:32 policy-pap | ssl.truststore.location = null 16:15:32 policy-pap | ssl.truststore.password = null 16:15:32 policy-pap | ssl.truststore.type = JKS 16:15:32 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:15:32 policy-pap | 16:15:32 policy-pap | [2024-03-20T16:13:34.476+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:15:32 policy-pap | [2024-03-20T16:13:34.477+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | select 'upgrade to 1100 completed' as msg 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | msg 16:15:32 policy-db-migrator | upgrade to 1100 completed 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0120-audit_sequence.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | TRUNCATE TABLE sequence 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | DROP TABLE pdpstatistics 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | DROP TABLE statistics_sequence 16:15:32 policy-db-migrator | -------------- 16:15:32 policy-db-migrator | 16:15:32 policy-db-migrator | policyadmin: OK: upgrade (1300) 16:15:32 policy-db-migrator | name version 16:15:32 policy-db-migrator | policyadmin 1300 16:15:32 policy-db-migrator | ID script operation from_version to_version tag success atTime 16:15:32 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-pap | [2024-03-20T16:13:34.477+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710951214475 16:15:32 policy-pap | [2024-03-20T16:13:34.479+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-1, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Subscribed to topic(s): policy-pdp-pap 16:15:32 policy-pap | [2024-03-20T16:13:34.480+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 16:15:32 policy-pap | allow.auto.create.topics = true 16:15:32 policy-pap | auto.commit.interval.ms = 5000 16:15:32 policy-pap | auto.include.jmx.reporter = true 16:15:32 policy-pap | auto.offset.reset = latest 16:15:32 policy-pap | bootstrap.servers = [kafka:9092] 16:15:32 policy-pap | check.crcs = true 16:15:32 policy-pap | client.dns.lookup = use_all_dns_ips 16:15:32 policy-pap | client.id = consumer-policy-pap-2 16:15:32 policy-pap | client.rack = 16:15:32 policy-pap | connections.max.idle.ms = 540000 16:15:32 policy-pap | default.api.timeout.ms = 60000 16:15:32 policy-pap | enable.auto.commit = true 16:15:32 policy-pap | exclude.internal.topics = true 16:15:32 policy-pap | fetch.max.bytes = 52428800 16:15:32 policy-pap | fetch.max.wait.ms = 500 16:15:32 policy-pap | fetch.min.bytes = 1 16:15:32 policy-pap | group.id = policy-pap 16:15:32 policy-pap | group.instance.id = null 16:15:32 policy-pap | heartbeat.interval.ms = 3000 16:15:32 policy-pap | interceptor.classes = [] 16:15:32 policy-pap | internal.leave.group.on.close = true 16:15:32 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 16:15:32 policy-pap | isolation.level = read_uncommitted 16:15:32 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:15:32 policy-pap | max.partition.fetch.bytes = 1048576 16:15:32 policy-pap | max.poll.interval.ms = 300000 16:15:32 policy-pap | max.poll.records = 500 16:15:32 policy-pap | metadata.max.age.ms = 300000 16:15:32 policy-pap | metric.reporters = [] 16:15:32 policy-pap | metrics.num.samples = 2 16:15:32 policy-pap | metrics.recording.level = INFO 16:15:32 policy-pap | metrics.sample.window.ms = 30000 16:15:32 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 16:15:32 policy-pap | receive.buffer.bytes = 65536 16:15:32 policy-pap | reconnect.backoff.max.ms = 1000 16:15:32 policy-pap | reconnect.backoff.ms = 50 16:15:32 policy-pap | request.timeout.ms = 30000 16:15:32 policy-pap | retry.backoff.ms = 100 16:15:32 policy-pap | sasl.client.callback.handler.class = null 16:15:32 policy-pap | sasl.jaas.config = null 16:15:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:15:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 16:15:32 policy-pap | sasl.kerberos.service.name = null 16:15:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 16:15:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 16:15:32 policy-pap | sasl.login.callback.handler.class = null 16:15:32 policy-pap | sasl.login.class = null 16:15:32 policy-pap | sasl.login.connect.timeout.ms = null 16:15:32 policy-pap | sasl.login.read.timeout.ms = null 16:15:32 policy-pap | sasl.login.refresh.buffer.seconds = 300 16:15:32 policy-pap | sasl.login.refresh.min.period.seconds = 60 16:15:32 policy-pap | sasl.login.refresh.window.factor = 0.8 16:15:32 policy-pap | 
sasl.login.refresh.window.jitter = 0.05 16:15:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000 16:15:32 policy-pap | sasl.login.retry.backoff.ms = 100 16:15:32 policy-pap | sasl.mechanism = GSSAPI 16:15:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 16:15:32 policy-pap | sasl.oauthbearer.expected.audience = null 16:15:32 policy-pap | sasl.oauthbearer.expected.issuer = null 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 16:15:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope 16:15:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub 16:15:32 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:05 16:15:32 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql 
upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-apex-pdp | security.protocol = PLAINTEXT 16:15:32 policy-apex-pdp | security.providers = null 16:15:32 policy-apex-pdp | send.buffer.bytes = 131072 16:15:32 policy-apex-pdp | session.timeout.ms = 45000 16:15:32 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 16:15:32 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 16:15:32 policy-apex-pdp | ssl.cipher.suites = null 16:15:32 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:15:32 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 16:15:32 policy-apex-pdp | ssl.engine.factory.class = null 16:15:32 policy-apex-pdp | ssl.key.password = null 16:15:32 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 16:15:32 policy-apex-pdp | ssl.keystore.certificate.chain = null 16:15:32 policy-apex-pdp | ssl.keystore.key = null 16:15:32 policy-apex-pdp | ssl.keystore.location = null 16:15:32 policy-apex-pdp | ssl.keystore.password = null 16:15:32 policy-apex-pdp | ssl.keystore.type = JKS 16:15:32 policy-apex-pdp | ssl.protocol = TLSv1.3 16:15:32 policy-apex-pdp | ssl.provider = null 16:15:32 policy-apex-pdp | ssl.secure.random.implementation = null 16:15:32 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 16:15:32 policy-apex-pdp | ssl.truststore.certificates = null 16:15:32 policy-apex-pdp | ssl.truststore.location = null 16:15:32 policy-apex-pdp | ssl.truststore.password = null 16:15:32 policy-apex-pdp | ssl.truststore.type = JKS 16:15:32 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:15:32 policy-apex-pdp | 16:15:32 policy-apex-pdp | 
[2024-03-20T16:13:37.364+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.364+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.364+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710951217362 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.367+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-1, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Subscribed to topic(s): policy-pdp-pap 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.388+00:00|INFO|ServiceManager|main] service manager starting 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.389+00:00|INFO|ServiceManager|main] service manager starting topics 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.393+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2f5e0a58-910b-431c-bb29-e00354420c7f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.421+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 16:15:32 policy-apex-pdp | allow.auto.create.topics = true 16:15:32 policy-apex-pdp | auto.commit.interval.ms = 5000 16:15:32 policy-apex-pdp | auto.include.jmx.reporter = true 16:15:32 policy-apex-pdp | auto.offset.reset = latest 16:15:32 policy-apex-pdp | bootstrap.servers = [kafka:9092] 16:15:32 policy-apex-pdp | check.crcs = true 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.620462048Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=844.163µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.624318575Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.62940069Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.085115ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.632852532Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.632920763Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=69.071µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.638654008Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.639556561Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=902.413µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.643701933Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.644655657Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=954.564µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.647793444Z level=info msg="Executing 
migration" id="Drop old annotation table v4" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.647875475Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=82.451µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.653742702Z level=info msg="Executing migration" id="create annotation table v5" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.654889279Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.146557ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.659145202Z level=info msg="Executing migration" id="add index annotation 0 v3" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.660523023Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.377001ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.66366534Z level=info msg="Executing migration" id="add index annotation 1 v3" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.668770856Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=5.093935ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.673326833Z level=info msg="Executing migration" id="add index annotation 2 v3" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.675257262Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.930809ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.68123625Z level=info msg="Executing migration" id="add index annotation 3 v3" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.682258966Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.023036ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.68654771Z level=info msg="Executing migration" id="add index annotation 4 v3" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.687631876Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.084256ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.69467812Z level=info msg="Executing migration" id="Update annotation table charset" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.694704251Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.191µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.699055575Z level=info msg="Executing migration" id="Add column region_id to annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.707492541Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=8.434526ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.711149935Z level=info msg="Executing migration" id="Drop category_id index" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.71217226Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.031745ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.715443669Z level=info msg="Executing migration" id="Add column tags to annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.718586346Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.142136ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.722659586Z level=info msg="Executing migration" id="Create annotation_tag table v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.723554029Z level=info msg="Migration successfully 
executed" id="Create annotation_tag table v2" duration=893.613µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.726830208Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.727885884Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.055176ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.732200667Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.733354265Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.154068ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.737319584Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.749414803Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=12.091779ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.753037617Z level=info msg="Executing migration" id="Create annotation_tag table v3" 16:15:32 kafka | [2024-03-20 16:13:36,878] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,878] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,878] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,878] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,878] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,880] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,881] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,882] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,882] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,882] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,882] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,882] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,882] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,882] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,882] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,882] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,882] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,882] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,883] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,883] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,884] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,884] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,884] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,884] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,884] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,884] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,884] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,884] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,884] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,885] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,896] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 16:15:32 policy-apex-pdp | client.id = consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2 16:15:32 policy-apex-pdp | client.rack = 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.753746838Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=710.771µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.758838213Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.760295815Z level=info 
msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.456982ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.76398231Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.764411246Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=428.396µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.768795071Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.769609573Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=813.622µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.773171886Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.773528962Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=355.536µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.778014998Z level=info msg="Executing migration" id="Add created time to annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.783683453Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.667715ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.790575535Z level=info msg="Executing migration" id="Add updated time to annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.794799237Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.225752ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.878270428Z level=info msg="Executing migration" id="Add index for created in annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.880239367Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.97426ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.887035128Z level=info msg="Executing migration" id="Add index for updated in annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.889068308Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=2.0369ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.89458484Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.894966006Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=374.715µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.900148922Z level=info msg="Executing migration" id="Add epoch_end column" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.904588079Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.438316ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.911659663Z level=info msg="Executing migration" id="Add index for epoch_end" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.91279298Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.189287ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.917960097Z level=info msg="Executing migration" id="Make epoch_end the same as 
epoch" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.918240651Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=280.424µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.924401503Z level=info msg="Executing migration" id="Move region to single row" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.925104033Z level=info msg="Migration successfully executed" id="Move region to single row" duration=702.93µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.932636905Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.935141752Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=2.502507ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.94443445Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.945962813Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.524523ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.950154505Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.951267902Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.113277ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.95985598Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.961042097Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.187687ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.968102512Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.969164198Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.060726ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.9740201Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.976198742Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=2.178242ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.982889422Z level=info msg="Executing migration" id="Increase tags column to length 4096" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.983082974Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=193.322µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.992150669Z level=info msg="Executing migration" id="create test_data table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:02.993801754Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.656725ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.002753857Z level=info msg="Executing migration" id="create dashboard_version table v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.004174756Z level=info msg="Migration successfully 
executed" id="create dashboard_version table v1" duration=1.422109ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.011809686Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.012735355Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=926.129µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.018868978Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.020471814Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.604616ms 16:15:32 kafka | [2024-03-20 16:13:36,897] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,897] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,897] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,897] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,897] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,897] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,897] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,897] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,898] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,898] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,898] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,898] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica 
(state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,899] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,900] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,900] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,900] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,900] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,900] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,900] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,900] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,900] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,900] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to 
NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,900] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,901] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,901] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,901] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,901] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,901] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,901] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:36,901] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 16:15:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null 16:15:32 policy-pap | security.protocol = PLAINTEXT 16:15:32 policy-pap | security.providers = null 16:15:32 policy-pap | send.buffer.bytes = 131072 16:15:32 policy-pap | session.timeout.ms = 45000 16:15:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000 16:15:32 policy-pap | socket.connection.setup.timeout.ms = 10000 16:15:32 policy-pap | ssl.cipher.suites = null 16:15:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:15:32 policy-pap | ssl.endpoint.identification.algorithm = https 16:15:32 policy-pap | ssl.engine.factory.class = null 16:15:32 policy-pap | ssl.key.password = null 16:15:32 policy-pap | ssl.keymanager.algorithm = SunX509 16:15:32 policy-pap | ssl.keystore.certificate.chain = null 16:15:32 policy-pap | ssl.keystore.key = null 16:15:32 policy-pap | ssl.keystore.location = null 16:15:32 policy-pap | ssl.keystore.password = null 16:15:32 policy-pap | ssl.keystore.type = JKS 16:15:32 policy-pap | ssl.protocol = TLSv1.3 16:15:32 policy-pap | ssl.provider = null 16:15:32 policy-pap | ssl.secure.random.implementation = null 16:15:32 policy-pap | ssl.trustmanager.algorithm = PKIX 16:15:32 policy-pap | ssl.truststore.certificates = null 16:15:32 policy-pap | ssl.truststore.location = null 16:15:32 policy-pap | ssl.truststore.password = null 16:15:32 policy-pap | ssl.truststore.type = JKS 16:15:32 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:15:32 policy-pap | 16:15:32 policy-pap | [2024-03-20T16:13:34.486+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:15:32 policy-pap | [2024-03-20T16:13:34.486+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:15:32 policy-pap | [2024-03-20T16:13:34.486+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710951214486 16:15:32 policy-pap | [2024-03-20T16:13:34.486+00:00|INFO|KafkaConsumer|main] 
[Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.025920339Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.026122311Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=201.902µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.02993898Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.030313124Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=374.254µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.032517236Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.032586397Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=69.711µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.037213194Z level=info msg="Executing migration" id="create team table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.038427316Z level=info msg="Migration successfully executed" id="create team table" duration=1.227823ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.046401167Z level=info msg="Executing migration" id="add index team.org_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.048350816Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.947259ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.0546521Z level=info msg="Executing migration" id="add unique index team_org_id_name" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.056279967Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.627697ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.060093845Z level=info msg="Executing migration" id="Add column uid in team" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.066184127Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=6.089922ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.07244274Z level=info msg="Executing migration" id="Update uid column values in team" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.072625572Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=186.302µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.078921496Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.080231849Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.310443ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.085544963Z level=info msg="Executing migration" id="create team member table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.086810376Z level=info msg="Migration successfully executed" id="create team member table" duration=1.263823ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.093564614Z level=info msg="Executing migration" id="add index team_member.org_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.09509737Z level=info msg="Migration successfully executed" id="add index 
team_member.org_id" duration=1.532866ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.103780648Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.104715057Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=934.359µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.11091377Z level=info msg="Executing migration" id="add index team_member.team_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.112503906Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.589396ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.118332165Z level=info msg="Executing migration" id="Add column email to team table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.123183805Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.85129ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.133281267Z level=info msg="Executing migration" id="Add column external to team_member table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.139718892Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=6.433905ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.144837914Z level=info msg="Executing migration" id="Add column permission to team_member table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.150281819Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.441815ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.162293091Z level=info msg="Executing migration" id="create dashboard acl table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.163302801Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.01492ms 16:15:32 policy-pap | [2024-03-20T16:13:34.819+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 16:15:32 policy-pap | [2024-03-20T16:13:34.966+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning
16:15:32 policy-pap | [2024-03-20T16:13:35.216+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@3879feec, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@71d2261e, org.springframework.security.web.context.SecurityContextHolderFilter@399fd710, org.springframework.security.web.header.HeaderWriterFilter@7bd7d71c, org.springframework.security.web.authentication.logout.LogoutFilter@6cbb6c41, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@dcdb883, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1f013047, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@51566ce0, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@53917c92, org.springframework.security.web.access.ExceptionTranslationFilter@7c6ab057, org.springframework.security.web.access.intercept.AuthorizationFilter@6f89ad03]
16:15:32 policy-pap | [2024-03-20T16:13:36.031+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
16:15:32 policy-pap | [2024-03-20T16:13:36.133+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
16:15:32 policy-pap | [2024-03-20T16:13:36.151+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
16:15:32 policy-pap | [2024-03-20T16:13:36.171+00:00|INFO|ServiceManager|main] Policy PAP starting
16:15:32 policy-pap | [2024-03-20T16:13:36.171+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
16:15:32 policy-pap | [2024-03-20T16:13:36.172+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
16:15:32 policy-pap | [2024-03-20T16:13:36.173+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
16:15:32 policy-pap | [2024-03-20T16:13:36.173+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
16:15:32 policy-pap | [2024-03-20T16:13:36.174+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
16:15:32 policy-pap | [2024-03-20T16:13:36.174+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
16:15:32 policy-pap | [2024-03-20T16:13:36.178+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=cd3571d2-bf35-4e38-b6c0-741ea8425298, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2a5ed225
16:15:32 policy-pap | [2024-03-20T16:13:36.191+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=cd3571d2-bf35-4e38-b6c0-741ea8425298, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
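
[editor's note] The ServiceManager entries above show PAP wiring its KAFKA topic source: a SingleThreadedBusTopicSource on topic policy-pdp-pap registers a MessageTypeDispatcher and then starts its consumer loop with fetchTimeout=15000. A minimal sketch of that dispatch pattern follows, assuming a plain kafka-clients consumer and JSON messages carrying a top-level messageName field; it is an illustration of the pattern, not the actual org.onap.policy.common implementation, and extractMessageName is a hypothetical helper.

import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.function.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MessageTypeDispatcherSketch {
    // Handlers keyed by message type, e.g. heartbeat and response listeners.
    private final Map<String, Consumer<String>> handlers = Map.of(
            "PDP_STATUS", msg -> System.out.println("heartbeat: " + msg));

    public void run(Properties props) {
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            while (true) {
                // fetchTimeout=15000 in the log maps onto the poll duration here.
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(15000))) {
                    String type = extractMessageName(rec.value());
                    handlers.getOrDefault(type, m -> { }).accept(rec.value());
                }
            }
        }
    }

    // Naive extraction of a top-level "messageName" JSON field; the real
    // dispatcher uses proper JSON parsing.
    private static String extractMessageName(String json) {
        int i = json.indexOf("\"messageName\"");
        if (i < 0) return "";
        int colon = json.indexOf(':', i);
        int q1 = json.indexOf('"', colon + 1);
        int q2 = json.indexOf('"', q1 + 1);
        return (q1 < 0 || q2 < 0) ? "" : json.substring(q1 + 1, q2);
    }
}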
16:15:32 policy-pap | [2024-03-20T16:13:36.192+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
16:15:32 policy-pap | allow.auto.create.topics = true
16:15:32 policy-pap | auto.commit.interval.ms = 5000
16:15:32 policy-pap | auto.include.jmx.reporter = true
16:15:32 policy-pap | auto.offset.reset = latest
16:15:32 policy-pap | bootstrap.servers = [kafka:9092]
16:15:32 policy-pap | check.crcs = true
16:15:32 policy-pap | client.dns.lookup = use_all_dns_ips
16:15:32 policy-pap | client.id = consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3
16:15:32 policy-pap | client.rack =
16:15:32 policy-pap | connections.max.idle.ms = 540000
16:15:32 policy-pap | default.api.timeout.ms = 60000
16:15:32 policy-pap | enable.auto.commit = true
16:15:32 policy-pap | exclude.internal.topics = true
16:15:32 policy-pap | fetch.max.bytes = 52428800
16:15:32 policy-pap | fetch.max.wait.ms = 500
16:15:32 policy-pap | fetch.min.bytes = 1
16:15:32 policy-pap | group.id = cd3571d2-bf35-4e38-b6c0-741ea8425298
16:15:32 policy-pap | group.instance.id = null
16:15:32 policy-pap | heartbeat.interval.ms = 3000
16:15:32 policy-pap | interceptor.classes = []
16:15:32 policy-pap | internal.leave.group.on.close = true
16:15:32 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
16:15:32 policy-pap | isolation.level = read_uncommitted
16:15:32 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
16:15:32 policy-pap | max.partition.fetch.bytes = 1048576
16:15:32 policy-pap | max.poll.interval.ms = 300000
16:15:32 policy-pap | max.poll.records = 500
16:15:32 policy-pap | metadata.max.age.ms = 300000
16:15:32 policy-pap | metric.reporters = []
16:15:32 policy-pap | metrics.num.samples = 2
16:15:32 policy-pap | metrics.recording.level = INFO
16:15:32 policy-pap | metrics.sample.window.ms = 30000
16:15:32 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
16:15:32 policy-pap | receive.buffer.bytes = 65536
16:15:32 policy-pap | reconnect.backoff.max.ms = 1000
16:15:32 policy-pap | reconnect.backoff.ms = 50
16:15:32 policy-pap | request.timeout.ms = 30000
16:15:32 policy-pap | retry.backoff.ms = 100
16:15:32 policy-pap | sasl.client.callback.handler.class = null
16:15:32 policy-pap | sasl.jaas.config = null
16:15:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
16:15:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
16:15:32 policy-pap | sasl.kerberos.service.name = null
16:15:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
16:15:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.171737957Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.173665246Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.880099ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.180390824Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.182295334Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.90818ms
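
[editor's note] The ConsumerConfig dump above is the effective kafka-clients 3.6.1 configuration for this PAP consumer. Below is a short sketch of how such a configuration is assembled programmatically, using only values visible in the dump (bootstrap servers, group and client ids, offset reset, string deserializers); the class name PapConsumerProps is illustrative, not part of the PAP codebase.

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public final class PapConsumerProps {
    // The values that identify this consumer in the dump above; the remaining
    // keys in that dump are left at their kafka-clients defaults.
    public static Properties build() {
        Properties p = new Properties();
        p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        p.put(ConsumerConfig.GROUP_ID_CONFIG, "cd3571d2-bf35-4e38-b6c0-741ea8425298");
        p.put(ConsumerConfig.CLIENT_ID_CONFIG, "consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3");
        p.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest"); // new group starts at the log end
        p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        return p;
    }
}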
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.193396076Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.194490247Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.095071ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.200204865Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.201274166Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.070701ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.207013184Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.208253147Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.240053ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.216928124Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.217936115Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.007711ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.241526234Z level=info msg="Executing migration" id="add index dashboard_permission"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.24310082Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.575346ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.249739027Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.250604786Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=864.999µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.258993151Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.259481686Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=488.345µs
16:15:32 policy-pap | sasl.login.callback.handler.class = null
16:15:32 policy-pap | sasl.login.class = null
16:15:32 policy-pap | sasl.login.connect.timeout.ms = null
16:15:32 policy-pap | sasl.login.read.timeout.ms = null
16:15:32 policy-pap | sasl.login.refresh.buffer.seconds = 300
16:15:32 policy-pap | sasl.login.refresh.min.period.seconds = 60
16:15:32 policy-pap | sasl.login.refresh.window.factor = 0.8
16:15:32 policy-pap | sasl.login.refresh.window.jitter = 0.05
16:15:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000
16:15:32 policy-pap | sasl.login.retry.backoff.ms = 100
16:15:32 policy-pap | sasl.mechanism = GSSAPI
16:15:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
16:15:32 policy-pap | sasl.oauthbearer.expected.audience = null
16:15:32 policy-pap | sasl.oauthbearer.expected.issuer = null
16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
16:15:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope
16:15:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub
16:15:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null
16:15:32 policy-pap | security.protocol = PLAINTEXT
16:15:32 policy-pap | security.providers = null
16:15:32 policy-pap | send.buffer.bytes = 131072
16:15:32 policy-pap | session.timeout.ms = 45000
16:15:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000
16:15:32 policy-pap | socket.connection.setup.timeout.ms = 10000
16:15:32 policy-pap | ssl.cipher.suites = null
16:15:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
16:15:32 policy-pap | ssl.endpoint.identification.algorithm = https
16:15:32 policy-pap | ssl.engine.factory.class = null
16:15:32 policy-pap | ssl.key.password = null
16:15:32 policy-pap | ssl.keymanager.algorithm = SunX509
16:15:32 policy-pap | ssl.keystore.certificate.chain = null
16:15:32 policy-pap | ssl.keystore.key = null
16:15:32 policy-pap | ssl.keystore.location = null
16:15:32 kafka | [2024-03-20 16:13:36,901] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:36,901] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:36,901] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:36,902] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:36,902] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:36,902] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:36,902] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:36,903] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:36,903] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:36,904] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:36,904] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
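
[editor's note] The controller entries above and below trace two state machines: each replica moves NonExistentReplica -> NewReplica, and each partition moves NewPartition -> OnlinePartition once a leader and ISR are assigned (the LeaderAndIsr payload). A toy sketch of that kind of transition checking follows; the real logic is Scala inside the broker's controller, and this transition table is a simplified subset for illustration, not Kafka's full state machine.

import java.util.Map;
import java.util.Set;

enum ReplicaState { NON_EXISTENT_REPLICA, NEW_REPLICA, ONLINE_REPLICA, OFFLINE_REPLICA }

final class ReplicaStateMachineSketch {
    // Simplified subset of legal transitions, matching what the log shows.
    private static final Map<ReplicaState, Set<ReplicaState>> VALID = Map.of(
            ReplicaState.NON_EXISTENT_REPLICA, Set.of(ReplicaState.NEW_REPLICA),
            ReplicaState.NEW_REPLICA, Set.of(ReplicaState.ONLINE_REPLICA),
            ReplicaState.ONLINE_REPLICA, Set.of(ReplicaState.OFFLINE_REPLICA),
            ReplicaState.OFFLINE_REPLICA, Set.of(ReplicaState.ONLINE_REPLICA));

    static void transition(String partition, ReplicaState from, ReplicaState to) {
        if (!VALID.getOrDefault(from, Set.of()).contains(to)) {
            throw new IllegalStateException(partition + ": illegal " + from + " -> " + to);
        }
        // Mirrors the TRACE line format in the controller log above.
        System.out.printf("Changed state of replica 1 for partition %s from %s to %s%n",
                partition, from, to);
    }
}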
16:15:32 kafka | [2024-03-20 16:13:37,053] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)),
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,054] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,055] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,055] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,055] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,055] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,055] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,055] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,055] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,055] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 
from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,055] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,056] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,056] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,056] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,056] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,056] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,056] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,056] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,056] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,056] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,056] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,056] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 16:15:32 simulator | overriding logback.xml 16:15:32 simulator | 2024-03-20 16:13:02,728 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 16:15:32 simulator | 2024-03-20 16:13:02,793 INFO org.onap.policy.models.simulators starting 16:15:32 simulator | 2024-03-20 16:13:02,794 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 16:15:32 simulator | 2024-03-20 16:13:02,993 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 16:15:32 simulator | 2024-03-20 16:13:02,994 INFO org.onap.policy.models.simulators starting A&AI simulator 16:15:32 simulator | 2024-03-20 16:13:03,104 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 16:15:32 simulator | 2024-03-20 16:13:03,117 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:15:32 simulator | 2024-03-20 16:13:03,123 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:15:32 simulator | 2024-03-20 16:13:03,128 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 16:15:32 simulator | 2024-03-20 16:13:03,200 INFO Session workerName=node0 16:15:32 simulator | 2024-03-20 16:13:03,752 INFO Using GSON for REST calls 16:15:32 simulator | 2024-03-20 16:13:03,825 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE} 16:15:32 simulator | 2024-03-20 16:13:03,833 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 16:15:32 simulator | 2024-03-20 16:13:03,847 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1621ms 16:15:32 policy-apex-pdp | connections.max.idle.ms = 540000 16:15:32 policy-apex-pdp | default.api.timeout.ms = 60000 16:15:32 policy-apex-pdp | enable.auto.commit = true 16:15:32 policy-apex-pdp | exclude.internal.topics = true 16:15:32 policy-apex-pdp | fetch.max.bytes = 52428800 16:15:32 policy-apex-pdp | fetch.max.wait.ms = 500 16:15:32 policy-apex-pdp | fetch.min.bytes = 1 16:15:32 policy-apex-pdp | group.id = 2f5e0a58-910b-431c-bb29-e00354420c7f 16:15:32 policy-apex-pdp | group.instance.id = null 16:15:32 policy-apex-pdp | heartbeat.interval.ms = 3000 16:15:32 policy-apex-pdp | interceptor.classes = [] 16:15:32 policy-apex-pdp | internal.leave.group.on.close = true 16:15:32 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 16:15:32 policy-apex-pdp | isolation.level = read_uncommitted 16:15:32 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:15:32 policy-apex-pdp | max.partition.fetch.bytes = 1048576 16:15:32 policy-apex-pdp | max.poll.interval.ms = 300000 16:15:32 policy-apex-pdp | max.poll.records = 500 16:15:32 policy-apex-pdp | metadata.max.age.ms = 300000 16:15:32 policy-apex-pdp | metric.reporters = [] 16:15:32 policy-apex-pdp | metrics.num.samples = 2 16:15:32 policy-apex-pdp | metrics.recording.level = INFO 16:15:32 policy-apex-pdp | metrics.sample.window.ms = 30000 16:15:32 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 16:15:32 policy-apex-pdp | receive.buffer.bytes = 65536 16:15:32 policy-apex-pdp | reconnect.backoff.max.ms = 1000 16:15:32 policy-apex-pdp | reconnect.backoff.ms = 50 16:15:32 policy-apex-pdp | request.timeout.ms = 30000 16:15:32 policy-apex-pdp | retry.backoff.ms = 100 16:15:32 policy-apex-pdp | sasl.client.callback.handler.class = null 16:15:32 policy-apex-pdp | sasl.jaas.config = null 16:15:32 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:15:32 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 16:15:32 
policy-apex-pdp | sasl.kerberos.service.name = null 16:15:32 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 16:15:32 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 16:15:32 policy-apex-pdp | sasl.login.callback.handler.class = null 16:15:32 policy-apex-pdp | sasl.login.class = null 16:15:32 policy-apex-pdp | sasl.login.connect.timeout.ms = null 16:15:32 policy-apex-pdp | sasl.login.read.timeout.ms = null 16:15:32 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 16:15:32 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 16:15:32 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 16:15:32 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 16:15:32 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 16:15:32 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 16:15:32 simulator | 2024-03-20 16:13:03,848 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4272 ms. 16:15:32 simulator | 2024-03-20 16:13:03,867 INFO org.onap.policy.models.simulators starting SDNC simulator 16:15:32 simulator | 2024-03-20 16:13:03,886 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 16:15:32 simulator | 2024-03-20 16:13:03,887 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:15:32 simulator | 2024-03-20 16:13:03,889 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:15:32 simulator | 2024-03-20 16:13:03,890 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 16:15:32 simulator | 2024-03-20 16:13:03,903 INFO Session workerName=node0 16:15:32 simulator | 2024-03-20 16:13:03,955 INFO Using GSON for REST calls 16:15:32 simulator | 2024-03-20 16:13:03,964 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE} 16:15:32 simulator | 2024-03-20 16:13:03,965 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 16:15:32 simulator | 2024-03-20 16:13:03,966 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1739ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.266766269Z level=info msg="Executing migration" id="create tag table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.268156644Z level=info msg="Migration successfully executed" id="create tag table" duration=1.390295ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.274660139Z level=info msg="Executing migration" id="add index tag.key_value" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.276576289Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.91577ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.283757922Z level=info msg="Executing migration" id="create login attempt table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.284692391Z level=info msg="Migration successfully executed" id="create login attempt table" duration=934.719µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.289655981Z level=info msg="Executing migration" id="add index login_attempt.username" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.290644041Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=987.97µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.298171578Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.299807524Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.635796ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.348873531Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.365129306Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=16.260525ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.371779764Z level=info msg="Executing migration" id="create login_attempt v2" 16:15:32 grafana | logger=migrator 
t=2024-03-20T16:13:03.372853104Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.0731ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.379126168Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.38035484Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.215122ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.386653154Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.387100449Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=453.155µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.396166741Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.39712132Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=954.329µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.402533985Z level=info msg="Executing migration" id="create user auth table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.403349483Z level=info msg="Migration successfully executed" id="create user auth table" duration=816.728µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.409719578Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.412528876Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=2.814638ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.421027053Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.421134584Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=108.231µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.429886762Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.436217546Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=6.330984ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.442753133Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.448126587Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.374844ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.452279189Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.460065708Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=7.776639ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.465568624Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.473834678Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=8.271494ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.479081261Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 16:15:32 grafana | logger=migrator 
t=2024-03-20T16:13:03.479764788Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=687.517µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.489204063Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.495142044Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.939521ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.502449458Z level=info msg="Executing migration" id="create server_lock table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.503548869Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.107722ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.509499499Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.511171416Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.675287ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.518173647Z level=info msg="Executing migration" id="create user auth token table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.519187447Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.02ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.53028179Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 16:15:32 simulator | 2024-03-20 16:13:03,966 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms. 
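
[editor's note] Each simulator above is an embedded Jetty server with a Jersey ServletContainer mounted at /* (visible in the JettyServletServer toString blocks: A&AI on 6666, SDNC on 6668, and so on). A minimal sketch of that arrangement, assuming Jetty 11 and Jersey on the classpath; the port is taken from the A&AI simulator entries, while the provider-packages wiring is a guessed illustration, not the simulator's actual bootstrap code.

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.glassfish.jersey.servlet.ServletContainer;

public class SimulatorServerSketch {
    public static void main(String[] args) throws Exception {
        Server server = new Server(6666); // A&AI simulator port from the log
        ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");
        // Jersey's ServletContainer serves the JAX-RS resources, matching the
        // org.glassfish.jersey.servlet.ServletContainer entries above.
        ServletHolder jersey = new ServletHolder(ServletContainer.class);
        // Hypothetical scan root; the simulator's resource classes live in
        // org.onap.policy.simulators.
        jersey.setInitParameter("jersey.config.server.provider.packages",
                "org.onap.policy.simulators");
        context.addServlet(jersey, "/*");
        server.setHandler(context);
        server.start();   // produces "Started Server@...{STARTING}" lines like above
        server.join();
    }
}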
16:15:32 simulator | 2024-03-20 16:13:03,972 INFO org.onap.policy.models.simulators starting SO simulator 16:15:32 simulator | 2024-03-20 16:13:03,974 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 16:15:32 simulator | 2024-03-20 16:13:03,975 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:15:32 simulator | 2024-03-20 16:13:03,977 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:15:32 simulator | 2024-03-20 16:13:03,978 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 16:15:32 simulator | 2024-03-20 16:13:03,984 INFO Session workerName=node0 16:15:32 simulator | 2024-03-20 16:13:04,061 INFO Using GSON for REST calls 16:15:32 simulator | 2024-03-20 16:13:04,077 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE} 16:15:32 simulator | 2024-03-20 16:13:04,078 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 16:15:32 simulator | 2024-03-20 16:13:04,079 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @1853ms 16:15:32 simulator | 2024-03-20 16:13:04,079 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4897 ms. 16:15:32 simulator | 2024-03-20 16:13:04,080 INFO org.onap.policy.models.simulators starting VFC simulator 16:15:32 simulator | 2024-03-20 16:13:04,082 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 16:15:32 simulator | 2024-03-20 16:13:04,082 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:15:32 simulator | 2024-03-20 16:13:04,093 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:15:32 simulator | 2024-03-20 16:13:04,094 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 16:15:32 simulator | 2024-03-20 16:13:04,105 INFO Session workerName=node0 16:15:32 simulator | 2024-03-20 16:13:04,144 INFO Using GSON for REST calls 16:15:32 simulator | 
2024-03-20 16:13:04,152 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE} 16:15:32 simulator | 2024-03-20 16:13:04,153 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 16:15:32 simulator | 2024-03-20 16:13:04,153 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @1927ms 16:15:32 simulator | 2024-03-20 16:13:04,153 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4930 ms. 16:15:32 simulator | 2024-03-20 16:13:04,154 INFO org.onap.policy.models.simulators started 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.532113748Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.836948ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.539056689Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.5401586Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.093431ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.545305192Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.546040319Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=735.817µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.553920209Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.559479825Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.565126ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.566131863Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 16:15:32 policy-pap | ssl.keystore.password = null 16:15:32 policy-pap | ssl.keystore.type = JKS 16:15:32 policy-pap | ssl.protocol = TLSv1.3 16:15:32 policy-pap | ssl.provider = null 16:15:32 policy-pap | ssl.secure.random.implementation = null 16:15:32 policy-pap | ssl.trustmanager.algorithm = PKIX 16:15:32 policy-pap | ssl.truststore.certificates = null 16:15:32 policy-pap | ssl.truststore.location = null 16:15:32 policy-pap | ssl.truststore.password = null 16:15:32 policy-pap | ssl.truststore.type = JKS 16:15:32 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:15:32 policy-pap | 16:15:32 policy-pap | [2024-03-20T16:13:36.198+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:15:32 policy-pap | [2024-03-20T16:13:36.198+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:15:32 policy-pap | [2024-03-20T16:13:36.198+00:00|INFO|AppInfoParser|main] Kafka 
startTimeMs: 1710951216198 16:15:32 policy-pap | [2024-03-20T16:13:36.198+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Subscribed to topic(s): policy-pdp-pap 16:15:32 policy-pap | [2024-03-20T16:13:36.198+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 16:15:32 policy-pap | [2024-03-20T16:13:36.198+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=68ecf45c-a300-46bc-a23d-d6558b61d71d, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4c6fabba 16:15:32 policy-pap | [2024-03-20T16:13:36.199+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=68ecf45c-a300-46bc-a23d-d6558b61d71d, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 16:15:32 policy-pap | [2024-03-20T16:13:36.199+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 16:15:32 policy-pap | allow.auto.create.topics = true 16:15:32 policy-pap | auto.commit.interval.ms = 5000 16:15:32 policy-pap | auto.include.jmx.reporter = true 16:15:32 policy-pap | auto.offset.reset = latest 16:15:32 policy-pap | bootstrap.servers = [kafka:9092] 16:15:32 policy-pap | check.crcs = true 16:15:32 policy-pap | client.dns.lookup = use_all_dns_ips 16:15:32 policy-pap | client.id = consumer-policy-pap-4 16:15:32 policy-pap | client.rack = 16:15:32 policy-pap | connections.max.idle.ms = 540000 16:15:32 policy-pap | default.api.timeout.ms = 60000 16:15:32 policy-pap | enable.auto.commit = true 16:15:32 policy-pap | exclude.internal.topics = true 16:15:32 policy-pap | fetch.max.bytes = 52428800 16:15:32 policy-pap | fetch.max.wait.ms = 500 16:15:32 policy-pap | fetch.min.bytes = 1 16:15:32 policy-pap | group.id = policy-pap 16:15:32 policy-pap | group.instance.id = null 16:15:32 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:06 16:15:32 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 
16:13:07 16:15:32 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:07 16:15:32 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql 
upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-pap | heartbeat.interval.ms = 3000 16:15:32 policy-pap | interceptor.classes = [] 16:15:32 policy-pap | internal.leave.group.on.close = true 16:15:32 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 16:15:32 policy-pap | isolation.level = read_uncommitted 16:15:32 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:15:32 policy-pap | max.partition.fetch.bytes = 1048576 16:15:32 policy-pap | max.poll.interval.ms = 300000 16:15:32 policy-pap | max.poll.records = 500 16:15:32 policy-pap | metadata.max.age.ms = 300000 16:15:32 policy-pap | metric.reporters = [] 16:15:32 policy-pap | metrics.num.samples = 2 16:15:32 policy-pap | metrics.recording.level = INFO 16:15:32 policy-pap | metrics.sample.window.ms = 30000 16:15:32 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 16:15:32 policy-pap | receive.buffer.bytes = 65536 16:15:32 policy-pap | reconnect.backoff.max.ms = 1000 16:15:32 policy-pap | reconnect.backoff.ms = 50 16:15:32 policy-pap | request.timeout.ms = 30000 16:15:32 policy-pap | retry.backoff.ms = 100 16:15:32 policy-pap | sasl.client.callback.handler.class = null 16:15:32 policy-pap | sasl.jaas.config = null 16:15:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:15:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 16:15:32 policy-pap | sasl.kerberos.service.name = null 16:15:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 16:15:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 16:15:32 policy-pap | sasl.login.callback.handler.class = null 16:15:32 policy-pap | sasl.login.class = null 16:15:32 policy-pap | sasl.login.connect.timeout.ms = null 16:15:32 policy-pap | 
sasl.login.read.timeout.ms = null 16:15:32 policy-pap | sasl.login.refresh.buffer.seconds = 300 16:15:32 policy-pap | sasl.login.refresh.min.period.seconds = 60 16:15:32 policy-pap | sasl.login.refresh.window.factor = 0.8 16:15:32 policy-pap | sasl.login.refresh.window.jitter = 0.05 16:15:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000 16:15:32 policy-pap | sasl.login.retry.backoff.ms = 100 16:15:32 policy-pap | sasl.mechanism = GSSAPI 16:15:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 16:15:32 policy-pap | sasl.oauthbearer.expected.audience = null 16:15:32 policy-pap | sasl.oauthbearer.expected.issuer = null 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 16:15:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope 16:15:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub 16:15:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null 16:15:32 policy-pap | security.protocol = PLAINTEXT 16:15:32 policy-pap | security.providers = null 16:15:32 policy-pap | send.buffer.bytes = 131072 16:15:32 policy-pap | session.timeout.ms = 45000 16:15:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000 16:15:32 policy-pap | socket.connection.setup.timeout.ms = 10000 16:15:32 policy-pap | ssl.cipher.suites = null 16:15:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:15:32 policy-pap | ssl.endpoint.identification.algorithm = https 16:15:32 policy-pap | ssl.engine.factory.class = null 16:15:32 policy-pap | ssl.key.password = null 16:15:32 policy-pap | ssl.keymanager.algorithm = SunX509 16:15:32 policy-pap | ssl.keystore.certificate.chain = null 16:15:32 policy-pap | ssl.keystore.key = null 16:15:32 policy-pap | ssl.keystore.location = null 16:15:32 policy-pap | ssl.keystore.password = null 16:15:32 policy-pap | ssl.keystore.type = JKS 16:15:32 policy-pap | ssl.protocol = TLSv1.3 16:15:32 policy-pap | ssl.provider = null 16:15:32 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:08 16:15:32 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2003241613050800u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 
2003241613050900u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:09 16:15:32 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2003241613050900u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2003241613051000u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2003241613051000u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2003241613051000u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2003241613051000u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2003241613051000u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2003241613051000u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2003241613051000u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2003241613051000u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2003241613051000u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2003241613051100u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2003241613051200u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2003241613051200u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2003241613051200u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2003241613051200u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2003241613051300u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2003241613051300u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2003241613051300u 1 2024-03-20 16:13:10 16:15:32 policy-db-migrator | policyadmin: OK @ 1300 16:15:32 policy-pap | ssl.secure.random.implementation = null 16:15:32 policy-pap | 
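Each policy-db-migrator history row above (ending in "policyadmin: OK @ 1300") appears to carry an id, a script name, an action, the from/to schema versions, a batch tag, a success flag, and a completion timestamp; the tag (e.g. 2003241613051300u) looks like the run's date/time plus the target version and a trailing "u" for upgrade, though that reading is an inference from the rows, not documented migrator behaviour. A sketch splitting one row into those assumed fields:

    # Split one policy-db-migrator history row into named fields.
    # The column names below are inferred from the row layout in the log
    # and are an assumption, not the migrator's documented schema.
    row = "124 0100-pdpstatistics.sql upgrade 1200 1300 2003241613051300u 1 2024-03-20 16:13:10"
    parts = row.split()
    record = {
        "id": int(parts[0]),
        "script": parts[1],
        "action": parts[2],
        "from_version": parts[3],
        "to_version": parts[4],
        "tag": parts[5],          # date/time + target version + 'u' (inferred)
        "success": parts[6] == "1",
        "completed_at": f"{parts[7]} {parts[8]}",
    }
    print(record)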
ssl.trustmanager.algorithm = PKIX 16:15:32 policy-pap | ssl.truststore.certificates = null 16:15:32 policy-pap | ssl.truststore.location = null 16:15:32 policy-pap | ssl.truststore.password = null 16:15:32 policy-pap | ssl.truststore.type = JKS 16:15:32 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:15:32 policy-pap | 16:15:32 policy-pap | [2024-03-20T16:13:36.204+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:15:32 policy-pap | [2024-03-20T16:13:36.204+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:15:32 policy-pap | [2024-03-20T16:13:36.204+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710951216204 16:15:32 policy-pap | [2024-03-20T16:13:36.204+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 16:15:32 policy-pap | [2024-03-20T16:13:36.205+00:00|INFO|ServiceManager|main] Policy PAP starting topics 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,057] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,061] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,061] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 16:15:32 kafka | [2024-03-20 
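The ConsumerConfig dump that finishes above belongs to consumer-policy-pap-4, which PAP then subscribes to the policy-pdp-pap topic. A minimal sketch mirroring the key values from that dump (bootstrap kafka:9092, group policy-pap, latest offsets, auto-commit every 5 s, PLAINTEXT); PAP itself uses the Java client, so the kafka-python library here is our choice for brevity, not what produced the log:

    from kafka import KafkaConsumer  # pip install kafka-python

    # Mirror the key ConsumerConfig values logged above for consumer-policy-pap-4.
    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers=["kafka:9092"],
        group_id="policy-pap",
        client_id="consumer-policy-pap-4",
        auto_offset_reset="latest",
        enable_auto_commit=True,
        auto_commit_interval_ms=5000,
        security_protocol="PLAINTEXT",
        value_deserializer=lambda raw: raw.decode("utf-8"),
    )
    for message in consumer:
        print(message.topic, message.partition, message.offset, message.value)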
16:13:37,061] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.567296855Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.165242ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.577129994Z level=info msg="Executing migration" id="create cache_data table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.578091914Z level=info msg="Migration successfully executed" id="create cache_data table" duration=961.88µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.582775162Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.583871713Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.096732ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.586988294Z level=info msg="Executing migration" id="create short_url table v1" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.588148646Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.160172ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.594251218Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.595175447Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=923.999µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.603675053Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.603787674Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=119.821µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.611425892Z level=info msg="Executing migration" id="delete alert_definition table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.611627034Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=201.052µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.619324882Z level=info msg="Executing migration" id="recreate alert_definition table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.620580565Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.255423ms 16:15:32 grafana | 
logger=migrator t=2024-03-20T16:13:03.624212191Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.625543065Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.330814ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.628714047Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.629684677Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=970.46µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.636245893Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.636453755Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=213.792µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.646048213Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.64773749Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.689407ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.651586909Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.652521648Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=935.079µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.656520269Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.657486479Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=966.02µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.662271277Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.663858163Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.582936ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.667104146Z level=info msg="Executing migration" id="Add column paused in alert_definition" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.674811674Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=7.705918ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.678739764Z level=info msg="Executing migration" id="drop alert_definition table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.679630743Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=890.529µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.686567463Z level=info msg="Executing migration" id="delete alert_definition_version table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.686649334Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=82.631µs 16:15:32 
grafana | logger=migrator t=2024-03-20T16:13:03.691721905Z level=info msg="Executing migration" id="recreate alert_definition_version table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.693336302Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.613907ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.697826947Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.698768797Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=941.83µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.701825008Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.702796358Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=970.92µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.707170822Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.707237143Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=66.891µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.710206473Z level=info msg="Executing migration" id="drop alert_definition_version table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.711041711Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=834.818µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.71392964Z level=info msg="Executing migration" id="create alert_instance table" 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-4 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,062] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
policy-pdp-pap-0 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,063] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,064] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.71485201Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=921.36µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.719332725Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 16:15:32 policy-apex-pdp | sasl.mechanism = GSSAPI 16:15:32 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 16:15:32 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 16:15:32 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 16:15:32 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 16:15:32 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 16:15:32 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 16:15:32 policy-apex-pdp | security.protocol = PLAINTEXT 16:15:32 policy-apex-pdp | security.providers = null 16:15:32 policy-apex-pdp | send.buffer.bytes = 131072 16:15:32 policy-apex-pdp | session.timeout.ms = 45000 16:15:32 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 16:15:32 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 16:15:32 policy-apex-pdp | ssl.cipher.suites = null 16:15:32 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:15:32 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 16:15:32 policy-apex-pdp | ssl.engine.factory.class = null 16:15:32 policy-apex-pdp | ssl.key.password = null 16:15:32 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 16:15:32 policy-apex-pdp | ssl.keystore.certificate.chain = null 16:15:32 policy-apex-pdp | ssl.keystore.key = null 16:15:32 policy-apex-pdp | ssl.keystore.location = null 16:15:32 policy-apex-pdp | ssl.keystore.password = null 16:15:32 policy-apex-pdp | ssl.keystore.type = JKS 16:15:32 policy-apex-pdp | ssl.protocol = TLSv1.3 16:15:32 policy-apex-pdp | ssl.provider = null 16:15:32 policy-apex-pdp | ssl.secure.random.implementation = null 16:15:32 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 16:15:32 policy-apex-pdp | ssl.truststore.certificates = null 16:15:32 policy-apex-pdp | ssl.truststore.location = null 16:15:32 policy-apex-pdp | ssl.truststore.password = null 16:15:32 policy-apex-pdp | ssl.truststore.type = JKS 16:15:32 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 16:15:32 policy-apex-pdp | 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.429+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:15:32 policy-apex-pdp | 
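The controller INFO line above reports "51 become-leader and 0 become-follower partitions": the 50 partitions of __consumer_offsets plus the single policy-pdp-pap-0 partition, all with replicas=[1] on broker 1, hence no followers. A sketch that recovers that tally from the TRACE lines, using the same hypothetical build.log copy of this console output:

    import re
    from collections import Counter

    # Tally the become-leader TRACE lines above by topic; the counts should
    # add up to the 51 become-leader partitions the controller reports.
    pattern = re.compile(r"Sending become-leader LeaderAndIsr request.*?topicName='([^']+)'")
    counts = Counter()
    with open("build.log", encoding="utf-8") as log:
        for line in log:
            for topic in pattern.findall(line):
                counts[topic] += 1
    print(dict(counts))  # expected from the log: {'__consumer_offsets': 50, 'policy-pdp-pap': 1}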
[2024-03-20T16:13:37.429+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.429+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710951217429 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.430+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Subscribed to topic(s): policy-pdp-pap 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.430+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d606158a-bd1e-4eca-9278-d85f03ab1cd0, alive=false, publisher=null]]: starting 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.444+00:00|INFO|ProducerConfig|main] ProducerConfig values: 16:15:32 policy-apex-pdp | acks = -1 16:15:32 policy-apex-pdp | auto.include.jmx.reporter = true 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.720355826Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.022831ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.723102903Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.724126234Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.020931ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.726589959Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.732344197Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.753648ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.73659325Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.737235707Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=642.767µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.740309268Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.740925284Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=615.926µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.74345654Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.766799176Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=23.334646ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.80868133Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.833098727Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=24.416918ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.836059987Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 16:15:32 grafana | logger=migrator 
t=2024-03-20T16:13:03.837033127Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=973.02µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.842104538Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.843062228Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=957.58µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.84820425Z level=info msg="Executing migration" id="add current_reason column related to current_state" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.855238041Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.034161ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.861185132Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.86597508Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.793599ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.86996334Z level=info msg="Executing migration" id="create alert_rule table" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.870773219Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=810.879µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.87487624Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.875715329Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=838.999µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.878508037Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.879303405Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=794.978µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.882471937Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.883301016Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=828.969µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.888262746Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.888391517Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=107.731µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.891177075Z level=info msg="Executing migration" id="add column for to alert_rule" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.89561757Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.438995ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.908071317Z level=info msg="Executing migration" id="add column annotations to alert_rule" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.913973216Z level=info msg="Migration successfully 
executed" id="add column annotations to alert_rule" duration=5.902169ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.923631674Z level=info msg="Executing migration" id="add column labels to alert_rule" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.927883697Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.252533ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.939405174Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.940236723Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=831.219µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.955964212Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.956825171Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=862.039µs 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.966918203Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.971398828Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.482755ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.983673803Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:03.989825495Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.152882ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.033404527Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 16:15:32 policy-apex-pdp | batch.size = 16384 16:15:32 policy-apex-pdp | bootstrap.servers = [kafka:9092] 16:15:32 policy-apex-pdp | buffer.memory = 33554432 16:15:32 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 16:15:32 policy-apex-pdp | client.id = producer-1 16:15:32 policy-apex-pdp | compression.type = none 16:15:32 policy-apex-pdp | connections.max.idle.ms = 540000 16:15:32 policy-apex-pdp | delivery.timeout.ms = 120000 16:15:32 policy-apex-pdp | enable.idempotence = true 16:15:32 policy-apex-pdp | interceptor.classes = [] 16:15:32 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 16:15:32 policy-apex-pdp | linger.ms = 0 16:15:32 policy-apex-pdp | max.block.ms = 60000 16:15:32 policy-apex-pdp | max.in.flight.requests.per.connection = 5 16:15:32 policy-apex-pdp | max.request.size = 1048576 16:15:32 policy-apex-pdp | metadata.max.age.ms = 300000 16:15:32 policy-apex-pdp | metadata.max.idle.ms = 300000 16:15:32 policy-apex-pdp | metric.reporters = [] 16:15:32 policy-apex-pdp | metrics.num.samples = 2 16:15:32 policy-apex-pdp | metrics.recording.level = INFO 16:15:32 policy-apex-pdp | metrics.sample.window.ms = 30000 16:15:32 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 16:15:32 policy-apex-pdp | partitioner.availability.timeout.ms = 0 16:15:32 policy-apex-pdp | partitioner.class = null 16:15:32 policy-apex-pdp | partitioner.ignore.keys = false 16:15:32 policy-apex-pdp | receive.buffer.bytes = 32768 16:15:32 policy-apex-pdp | reconnect.backoff.max.ms = 1000 16:15:32 
policy-apex-pdp | reconnect.backoff.ms = 50 16:15:32 policy-apex-pdp | request.timeout.ms = 30000 16:15:32 policy-apex-pdp | retries = 2147483647 16:15:32 policy-apex-pdp | retry.backoff.ms = 100 16:15:32 policy-apex-pdp | sasl.client.callback.handler.class = null 16:15:32 policy-apex-pdp | sasl.jaas.config = null 16:15:32 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:15:32 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 16:15:32 policy-apex-pdp | sasl.kerberos.service.name = null 16:15:32 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 16:15:32 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 16:15:32 policy-apex-pdp | sasl.login.callback.handler.class = null 16:15:32 policy-apex-pdp | sasl.login.class = null 16:15:32 policy-apex-pdp | sasl.login.connect.timeout.ms = null 16:15:32 policy-apex-pdp | sasl.login.read.timeout.ms = null 16:15:32 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 16:15:32 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 16:15:32 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 16:15:32 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 16:15:32 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 16:15:32 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 16:15:32 policy-apex-pdp | sasl.mechanism = GSSAPI 16:15:32 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 16:15:32 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 16:15:32 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:15:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 16:15:32 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 16:15:32 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 16:15:32 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 16:15:32 kafka | [2024-03-20 16:13:37,069] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to 
OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,071] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,072] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.034659435Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.258778ms 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.040093973Z level=info msg="Executing migration" id="add 
rule_group_idx column to alert_rule" 16:15:32 policy-pap | [2024-03-20T16:13:36.205+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=68ecf45c-a300-46bc-a23d-d6558b61d71d, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 16:15:32 policy-pap | [2024-03-20T16:13:36.205+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=cd3571d2-bf35-4e38-b6c0-741ea8425298, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 16:15:32 policy-pap | [2024-03-20T16:13:36.205+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=bfcf1d53-3209-4f71-99a2-bbd5c3c7ac79, alive=false, publisher=null]]: starting 16:15:32 policy-pap | [2024-03-20T16:13:36.227+00:00|INFO|ProducerConfig|main] ProducerConfig values: 16:15:32 policy-pap | acks = -1 16:15:32 policy-pap | auto.include.jmx.reporter = true 16:15:32 policy-pap | batch.size = 16384 16:15:32 policy-pap | bootstrap.servers = [kafka:9092] 16:15:32 policy-pap | buffer.memory = 33554432 16:15:32 policy-pap | client.dns.lookup = use_all_dns_ips 16:15:32 policy-pap | client.id = producer-1 16:15:32 policy-pap | compression.type = none 16:15:32 policy-pap | connections.max.idle.ms = 540000 16:15:32 policy-pap | delivery.timeout.ms = 120000 16:15:32 policy-pap | enable.idempotence = true 16:15:32 policy-pap | interceptor.classes = [] 16:15:32 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 16:15:32 policy-pap | linger.ms = 0 16:15:32 policy-pap | max.block.ms = 60000 16:15:32 policy-pap | max.in.flight.requests.per.connection = 5 16:15:32 policy-pap | max.request.size = 1048576 16:15:32 policy-pap | metadata.max.age.ms = 300000 16:15:32 policy-pap | metadata.max.idle.ms = 300000 16:15:32 policy-pap | metric.reporters = [] 16:15:32 policy-pap | metrics.num.samples = 2 16:15:32 policy-pap | metrics.recording.level = INFO 16:15:32 policy-pap | metrics.sample.window.ms = 30000 16:15:32 policy-pap | partitioner.adaptive.partitioning.enable = true 16:15:32 policy-pap | partitioner.availability.timeout.ms = 0 16:15:32 policy-pap | partitioner.class = null 16:15:32 policy-pap | partitioner.ignore.keys = false 16:15:32 policy-pap | receive.buffer.bytes = 32768 16:15:32 policy-pap | reconnect.backoff.max.ms = 1000 16:15:32 policy-pap | reconnect.backoff.ms = 50 16:15:32 policy-pap | request.timeout.ms = 30000 16:15:32 policy-pap | retries = 2147483647 16:15:32 policy-pap | retry.backoff.ms = 100 
16:15:32 policy-pap | sasl.client.callback.handler.class = null 16:15:32 policy-pap | sasl.jaas.config = null 16:15:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:15:32 policy-apex-pdp | security.protocol = PLAINTEXT 16:15:32 policy-apex-pdp | security.providers = null 16:15:32 policy-apex-pdp | send.buffer.bytes = 131072 16:15:32 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 16:15:32 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 16:15:32 policy-apex-pdp | ssl.cipher.suites = null 16:15:32 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:15:32 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 16:15:32 policy-apex-pdp | ssl.engine.factory.class = null 16:15:32 policy-apex-pdp | ssl.key.password = null 16:15:32 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 16:15:32 policy-apex-pdp | ssl.keystore.certificate.chain = null 16:15:32 policy-apex-pdp | ssl.keystore.key = null 16:15:32 policy-apex-pdp | ssl.keystore.location = null 16:15:32 policy-apex-pdp | ssl.keystore.password = null 16:15:32 policy-apex-pdp | ssl.keystore.type = JKS 16:15:32 policy-apex-pdp | ssl.protocol = TLSv1.3 16:15:32 policy-apex-pdp | ssl.provider = null 16:15:32 policy-apex-pdp | ssl.secure.random.implementation = null 16:15:32 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 16:15:32 policy-apex-pdp | ssl.truststore.certificates = null 16:15:32 policy-apex-pdp | ssl.truststore.location = null 16:15:32 policy-apex-pdp | ssl.truststore.password = null 16:15:32 policy-apex-pdp | ssl.truststore.type = JKS 16:15:32 policy-apex-pdp | transaction.timeout.ms = 60000 16:15:32 policy-apex-pdp | transactional.id = null 16:15:32 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 16:15:32 policy-apex-pdp | 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.453+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
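[editor's note] The ProducerConfig dump above (acks = -1, enable.idempotence = true, retries = 2147483647, StringSerializer for key and value) is what produces the "Instantiated an idempotent producer" line. For readers reproducing this outside the CSIT compose environment, a minimal Java sketch of an equivalent producer, assuming the kafka:9092 listener from this setup (the class name and payload are illustrative, not ONAP code):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Mirrors the non-default values in the dump; everything else stays at its default.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");                  // acks = -1 in the dump
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);    // logs "Instantiated an idempotent producer"
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);  // retries = 2147483647
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The PDP publishes its PDP_STATUS heartbeats to this topic; close() flushes on exit.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            }
        }
    }
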
16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.472+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.472+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.472+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710951217471 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.472+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d606158a-bd1e-4eca-9278-d85f03ab1cd0, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.472+00:00|INFO|ServiceManager|main] service manager starting set alive 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.472+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.477+00:00|INFO|ServiceManager|main] service manager starting topic sinks 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.477+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.479+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.479+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.479+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.479+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2f5e0a58-910b-431c-bb29-e00354420c7f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.480+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2f5e0a58-910b-431c-bb29-e00354420c7f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.480+00:00|INFO|ServiceManager|main] service manager starting Create REST server 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.517+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 16:15:32 policy-apex-pdp | [] 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.519+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 16:15:32 kafka | [2024-03-20 
16:13:37,073] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 16:15:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a24b3168-7ab2-431f-a5a5-730db9dc93dc","timestampMs":1710951217482,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.04614253Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.046547ms 16:15:32 kafka | [2024-03-20 16:13:37,073] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 policy-pap | sasl.kerberos.service.name = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.697+00:00|INFO|ServiceManager|main] service manager starting Rest Server 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.050931248Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 16:15:32 kafka | [2024-03-20 16:13:37,073] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.697+00:00|INFO|ServiceManager|main] service manager starting 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.055182739Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.251771ms 16:15:32 kafka | [2024-03-20 16:13:37,073] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.059602982Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 16:15:32 policy-pap | sasl.login.callback.handler.class = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.697+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 16:15:32 kafka | [2024-03-20 16:13:37,073] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 policy-pap | sasl.login.class = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.697+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, 
servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.059677373Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=75.461µs 16:15:32 policy-pap | sasl.login.connect.timeout.ms = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.710+00:00|INFO|ServiceManager|main] service manager started 16:15:32 kafka | [2024-03-20 16:13:37,073] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 policy-pap | sasl.login.read.timeout.ms = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.710+00:00|INFO|ServiceManager|main] service manager started 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.066105755Z level=info msg="Executing migration" id="create alert_rule_version table" 16:15:32 kafka | [2024-03-20 16:13:37,073] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 policy-pap | sasl.login.refresh.buffer.seconds = 300 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.711+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.067551086Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.444481ms 16:15:32 kafka | [2024-03-20 16:13:37,073] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 policy-pap | sasl.login.refresh.min.period.seconds = 60 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.071713085Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 16:15:32 kafka | [2024-03-20 16:13:37,073] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 16:15:32 policy-pap | sasl.login.refresh.window.factor = 0.8 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.713+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], 
servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.072861682Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.149457ms 16:15:32 kafka | [2024-03-20 16:13:37,073] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 16:15:32 policy-pap | sasl.login.refresh.window.jitter = 0.05 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.933+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: rOCUpPXHS92ZRRxFHWBetw 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.077201424Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 16:15:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.933+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Cluster ID: rOCUpPXHS92ZRRxFHWBetw 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.078423671Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.223497ms 16:15:32 kafka | [2024-03-20 16:13:37,079] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 16:15:32 policy-pap | sasl.login.retry.backoff.ms = 100 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.934+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.084436537Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 16:15:32 kafka | [2024-03-20 16:13:37,080] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | sasl.mechanism = GSSAPI 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.935+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.084513728Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=82.281µs 16:15:32 kafka | [2024-03-20 16:13:37,080] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.941+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] (Re-)joining group 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.087936997Z level=info msg="Executing migration" id="add column for to alert_rule_version" 16:15:32 kafka | [2024-03-20 16:13:37,080] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | sasl.oauthbearer.expected.audience = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.966+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Request joining group due to: need to re-join with the given member-id: consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2-a4cfeb9b-05bc-4cc7-acd4-28b5f4173a72 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.092371681Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.434844ms 16:15:32 kafka | [2024-03-20 16:13:37,080] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.966+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.096396628Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 16:15:32 policy-pap | sasl.oauthbearer.expected.issuer = null 16:15:32 kafka | [2024-03-20 16:13:37,080] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:37.966+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] (Re-)joining group 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.102953312Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.556504ms 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.107962464Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:15:32 policy-apex-pdp | [2024-03-20T16:13:38.451+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.112357037Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.394433ms 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:15:32 policy-apex-pdp | [2024-03-20T16:13:38.453+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.115526542Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:40.984+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2-a4cfeb9b-05bc-4cc7-acd4-28b5f4173a72', protocol='range'} 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.119939895Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.414543ms 16:15:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope 16:15:32 policy-apex-pdp | [2024-03-20T16:13:40.991+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Finished assignment for group at generation 1: {consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2-a4cfeb9b-05bc-4cc7-acd4-28b5f4173a72=Assignment(partitions=[policy-pdp-pap-0])} 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.123334144Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 16:15:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub 16:15:32 policy-apex-pdp | [2024-03-20T16:13:41.015+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2-a4cfeb9b-05bc-4cc7-acd4-28b5f4173a72', protocol='range'} 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.129855857Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.520033ms 16:15:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:41.016+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.171689307Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 16:15:32 policy-pap | security.protocol = PLAINTEXT 16:15:32 policy-apex-pdp | [2024-03-20T16:13:41.018+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] 
[Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Adding newly assigned partitions: policy-pdp-pap-0 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.171836579Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=136.722µs 16:15:32 policy-pap | security.providers = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:41.032+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Found no committed offset for partition policy-pdp-pap-0 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.175714415Z level=info msg="Executing migration" id=create_alert_configuration_table 16:15:32 policy-pap | send.buffer.bytes = 131072 16:15:32 policy-apex-pdp | [2024-03-20T16:13:41.065+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2, groupId=2f5e0a58-910b-431c-bb29-e00354420c7f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
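[editor's note] The consumer trace above is the normal first-contact rebalance, not an error: the broker rejects the initial JoinGroup with MemberIdRequiredException so the client retries with the broker-assigned member id, the group syncs, the sole member is assigned policy-pdp-pap-0, and because a fresh group has no committed offset the position is reset per auto.offset.reset (here to the log end, offset 1). A minimal Java sketch of the consumer side under the same assumptions (kafka:9092 listener; the group id here is illustrative, not the UUID group from the log):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "pdp-pap-debug"); // illustrative group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // subscribe() drives the JoinGroup/SyncGroup exchange seen in the trace;
                // the first poll() performs the member-id retry and the offset reset.
                consumer.subscribe(List.of("policy-pdp-pap"));
                while (true) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                rec.partition(), rec.offset(), rec.value());
                    }
                }
            }
        }
    }
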
16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.177010514Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.295599ms 16:15:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000 16:15:32 policy-apex-pdp | [2024-03-20T16:13:56.159+00:00|INFO|RequestLog|qtp1068445309-32] 172.17.0.2 - policyadmin [20/Mar/2024:16:13:56 +0000] "GET /metrics HTTP/1.1" 200 10644 "-" "Prometheus/2.51.0" 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.180760378Z level=info msg="Executing migration" id="Add column default in alert_configuration" 16:15:32 policy-pap | socket.connection.setup.timeout.ms = 10000 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.479+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.201544817Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=20.784129ms 16:15:32 policy-pap | ssl.cipher.suites = null 16:15:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d6ad7682-c93c-4197-b802-decc7b740c1d","timestampMs":1710951237479,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.206690781Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.501+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.206738602Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=48.291µs 16:15:32 kafka | [2024-03-20 16:13:37,081] 
TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | ssl.endpoint.identification.algorithm = https 16:15:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d6ad7682-c93c-4197-b802-decc7b740c1d","timestampMs":1710951237479,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.210235812Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | ssl.engine.factory.class = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.503+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.21697009Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.731308ms 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | ssl.key.password = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.662+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.221379173Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 16:15:32 kafka | [2024-03-20 16:13:37,081] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | ssl.keymanager.algorithm = SunX509 16:15:32 policy-apex-pdp | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a504a2da-8d70-43be-80c3-3972a17aabb4","timestampMs":1710951237604,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.222087073Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=707.89µs 16:15:32 kafka | [2024-03-20 16:13:37,083] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | ssl.keystore.certificate.chain = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.671+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.229229076Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 16:15:32 kafka | [2024-03-20 16:13:37,083] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | ssl.keystore.key = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.671+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.238294327Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=9.070201ms 16:15:32 kafka | [2024-03-20 16:13:37,083] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | ssl.keystore.location = null 16:15:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"adb11c3d-246a-4a9c-8f75-f586a12c0bc8","timestampMs":1710951237671,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.262067309Z level=info msg="Executing migration" id=create_ngalert_configuration_table 16:15:32 kafka | [2024-03-20 16:13:37,083] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | ssl.keystore.password = null 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.672+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.263284827Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.217348ms 16:15:32 kafka | [2024-03-20 16:13:37,083] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | ssl.keystore.type = JKS 16:15:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response 
message for PdpUpdate","policies":[],"response":{"responseTo":"a504a2da-8d70-43be-80c3-3972a17aabb4","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"fd738cf8-6de6-469d-94de-5a85fd583f32","timestampMs":1710951237671,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 kafka | [2024-03-20 16:13:37,083] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.682+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.267202593Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 16:15:32 policy-pap | ssl.protocol = TLSv1.3 16:15:32 kafka | [2024-03-20 16:13:37,084] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"adb11c3d-246a-4a9c-8f75-f586a12c0bc8","timestampMs":1710951237671,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.268806066Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.604903ms 16:15:32 policy-pap | ssl.provider = null 16:15:32 kafka | [2024-03-20 16:13:37,084] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.682+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.272793154Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 16:15:32 policy-pap | ssl.secure.random.implementation = null 16:15:32 kafka | [2024-03-20 16:13:37,084] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.682+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.27948042Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.686856ms 16:15:32 policy-pap | 
ssl.trustmanager.algorithm = PKIX 16:15:32 kafka | [2024-03-20 16:13:37,084] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a504a2da-8d70-43be-80c3-3972a17aabb4","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"fd738cf8-6de6-469d-94de-5a85fd583f32","timestampMs":1710951237671,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.285048641Z level=info msg="Executing migration" id="create provenance_type table" 16:15:32 policy-pap | ssl.truststore.certificates = null 16:15:32 kafka | [2024-03-20 16:13:37,084] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.683+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.285885203Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=836.132µs 16:15:32 policy-pap | ssl.truststore.location = null 16:15:32 kafka | [2024-03-20 16:13:37,084] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.734+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.304162106Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 16:15:32 kafka | [2024-03-20 16:13:37,084] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f0c0319a-f2cf-4921-aa46-d1012f3c7fcc","timestampMs":1710951237605,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 policy-pap | ssl.truststore.password = null 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.305750979Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.588683ms 
16:15:32 kafka | [2024-03-20 16:13:37,084] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.738+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 16:15:32 policy-pap | ssl.truststore.type = JKS 16:15:32 kafka | [2024-03-20 16:13:37,084] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.316095308Z level=info msg="Executing migration" id="create alert_image table" 16:15:32 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f0c0319a-f2cf-4921-aa46-d1012f3c7fcc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"7784a956-4250-4072-b6c4-e9015ed0fd04","timestampMs":1710951237737,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 policy-pap | transaction.timeout.ms = 60000 16:15:32 kafka | [2024-03-20 16:13:37,084] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.747+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 policy-pap | transactional.id = null 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.317326596Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.230808ms 16:15:32 kafka | [2024-03-20 16:13:37,084] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f0c0319a-f2cf-4921-aa46-d1012f3c7fcc","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"7784a956-4250-4072-b6c4-e9015ed0fd04","timestampMs":1710951237737,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.323748638Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.747+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 16:15:32 policy-pap | 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.32525545Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.506312ms 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.784+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 policy-pap | [2024-03-20T16:13:36.240+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.330559636Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"0de8b291-dfee-45f0-b06f-ebca27f12b4d","timestampMs":1710951237757,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 policy-pap | [2024-03-20T16:13:36.272+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.330655688Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=99.841µs 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.787+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 16:15:32 policy-pap | [2024-03-20T16:13:36.272+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.336103016Z level=info msg="Executing migration" id=create_alert_configuration_history_table 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"0de8b291-dfee-45f0-b06f-ebca27f12b4d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ff9cb3d2-ec93-4a4a-bb0d-3132bba09d00","timestampMs":1710951237786,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 policy-pap | [2024-03-20T16:13:36.272+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710951216271 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.337862781Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.760995ms 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.796+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 policy-pap | [2024-03-20T16:13:36.273+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=bfcf1d53-3209-4f71-99a2-bbd5c3c7ac79, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.344084371Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"0de8b291-dfee-45f0-b06f-ebca27f12b4d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ff9cb3d2-ec93-4a4a-bb0d-3132bba09d00","timestampMs":1710951237786,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 policy-pap | [2024-03-20T16:13:36.273+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d81c21d2-9fdd-4775-a391-be53f7d3c17e, alive=false, publisher=null]]: starting 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.345190547Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.104706ms 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:13:57.796+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 16:15:32 policy-pap | [2024-03-20T16:13:36.274+00:00|INFO|ProducerConfig|main] ProducerConfig values: 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.351193234Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-apex-pdp | [2024-03-20T16:14:56.084+00:00|INFO|RequestLog|qtp1068445309-29] 172.17.0.2 - policyadmin [20/Mar/2024:16:14:56 +0000] "GET /metrics HTTP/1.1" 200 10647 "-" "Prometheus/2.51.0" 16:15:32 policy-pap | acks = -1 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.351824583Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" 
id="drop unique orgID index on alert_configuration if exists" 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | auto.include.jmx.reporter = true 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.357658217Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | batch.size = 16384 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.358437518Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=778.822µs 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | bootstrap.servers = [kafka:9092] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.363887156Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 16:15:32 kafka | [2024-03-20 16:13:37,085] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 16:15:32 policy-pap | buffer.memory = 33554432 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.364994592Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.107546ms 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 16:15:32 policy-pap | client.dns.lookup = use_all_dns_ips 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.371888462Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 16:15:32 policy-pap | client.id = producer-2 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.379236438Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.328875ms 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] 
Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 16:15:32 policy-pap | compression.type = none 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.389016418Z level=info msg="Executing migration" id="create library_element table v1" 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 16:15:32 policy-pap | connections.max.idle.ms = 540000 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.389889631Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=871.393µs 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 16:15:32 policy-pap | delivery.timeout.ms = 120000 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.39397174Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 16:15:32 policy-pap | enable.idempotence = true 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.395639874Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.667784ms 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 16:15:32 policy-pap | interceptor.classes = [] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.400808478Z level=info msg="Executing migration" id="create library_element_connection table v1" 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 16:15:32 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.401694661Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=886.373µs 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 16:15:32 policy-pap | linger.ms = 0 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.405407545Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 16:15:32 policy-pap | max.block.ms = 60000 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.406603182Z level=info msg="Migration successfully executed" id="add index library_element_connection 
element_id-kind-connection_id" duration=1.195697ms 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 16:15:32 policy-pap | max.in.flight.requests.per.connection = 5 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.409978161Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 16:15:32 policy-pap | max.request.size = 1048576 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.411144568Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.165406ms 16:15:32 policy-pap | metadata.max.age.ms = 300000 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.417088143Z level=info msg="Executing migration" id="increase max description length to 2048" 16:15:32 policy-pap | metadata.max.idle.ms = 300000 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.417128083Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=41.31µs 16:15:32 policy-pap | metric.reporters = [] 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.42039111Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 16:15:32 policy-pap | metrics.num.samples = 2 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.420580663Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=188.923µs 16:15:32 policy-pap | metrics.recording.level = INFO 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.424573481Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 16:15:32 policy-pap | metrics.sample.window.ms = 30000 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.424976837Z level=info 
msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=404.076µs 16:15:32 policy-pap | partitioner.adaptive.partitioning.enable = true 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.429327669Z level=info msg="Executing migration" id="create data_keys table" 16:15:32 policy-pap | partitioner.availability.timeout.ms = 0 16:15:32 kafka | [2024-03-20 16:13:37,126] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.430383825Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.053505ms 16:15:32 policy-pap | partitioner.class = null 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.435807613Z level=info msg="Executing migration" id="create secrets table" 16:15:32 policy-pap | partitioner.ignore.keys = false 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.436684135Z level=info msg="Migration successfully executed" id="create secrets table" duration=876.242µs 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.441633187Z level=info msg="Executing migration" id="rename data_keys name column to id" 16:15:32 policy-pap | receive.buffer.bytes = 32768 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.472689444Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=31.053677ms 16:15:32 policy-pap | reconnect.backoff.max.ms = 1000 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.476365947Z level=info msg="Executing migration" id="add name column into data_keys" 16:15:32 policy-pap | reconnect.backoff.ms = 50 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.485740742Z level=info msg="Migration successfully executed" id="add name column into data_keys" 
duration=9.373545ms 16:15:32 policy-pap | request.timeout.ms = 30000 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.490992418Z level=info msg="Executing migration" id="copy data_keys id column values into name" 16:15:32 policy-pap | retries = 2147483647 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.491105509Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=113.661µs 16:15:32 policy-pap | retry.backoff.ms = 100 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.494835273Z level=info msg="Executing migration" id="rename data_keys name column to label" 16:15:32 policy-pap | sasl.client.callback.handler.class = null 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.531924388Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=37.051074ms 16:15:32 policy-pap | sasl.jaas.config = null 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.535495229Z level=info msg="Executing migration" id="rename data_keys id column back to name" 16:15:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.567113914Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=31.614505ms 16:15:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.572111376Z level=info msg="Executing migration" id="create kv_store table v1" 16:15:32 policy-pap | sasl.kerberos.service.name = null 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.572793516Z level=info msg="Migration successfully executed" 
id="create kv_store table v1" duration=683.53µs 16:15:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.578422157Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 16:15:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 16:15:32 policy-pap | sasl.login.callback.handler.class = null 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.579647645Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.225578ms 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 16:15:32 policy-pap | sasl.login.class = null 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.586373282Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 16:15:32 policy-pap | sasl.login.connect.timeout.ms = null 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.586624226Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=253.324µs 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 16:15:32 policy-pap | sasl.login.read.timeout.ms = null 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.590843217Z level=info msg="Executing migration" id="create permission table" 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 16:15:32 policy-pap | sasl.login.refresh.buffer.seconds = 300 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.591950233Z level=info msg="Migration successfully executed" id="create permission table" duration=1.107216ms 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 16:15:32 policy-pap | sasl.login.refresh.min.period.seconds = 60 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.595218179Z level=info msg="Executing migration" id="add unique index permission.role_id" 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 16:15:32 policy-pap | 
sasl.login.refresh.window.factor = 0.8 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.59667761Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.458431ms 16:15:32 kafka | [2024-03-20 16:13:37,127] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 16:15:32 policy-pap | sasl.login.refresh.window.jitter = 0.05 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.600266622Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 16:15:32 kafka | [2024-03-20 16:13:37,128] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 16:15:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.601898186Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.631524ms 16:15:32 kafka | [2024-03-20 16:13:37,128] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 16:15:32 policy-pap | sasl.login.retry.backoff.ms = 100 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.606572933Z level=info msg="Executing migration" id="create role table" 16:15:32 kafka | [2024-03-20 16:13:37,128] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 16:15:32 policy-pap | sasl.mechanism = GSSAPI 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.607706449Z level=info msg="Migration successfully executed" id="create role table" duration=1.137266ms 16:15:32 kafka | [2024-03-20 16:13:37,128] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 16:15:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.613634285Z level=info msg="Executing migration" id="add column display_name" 16:15:32 kafka | [2024-03-20 16:13:37,128] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 16:15:32 policy-pap | sasl.oauthbearer.expected.audience = null 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.621847643Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.211028ms 16:15:32 kafka | [2024-03-20 16:13:37,128] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 16:15:32 policy-pap | sasl.oauthbearer.expected.issuer = null 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.625748199Z level=info msg="Executing migration" id="add column group_name" 16:15:32 kafka | [2024-03-20 16:13:37,128] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 16:15:32 policy-pap | 
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.631100266Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.348947ms 16:15:32 kafka | [2024-03-20 16:13:37,128] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.635886285Z level=info msg="Executing migration" id="add index role.org_id" 16:15:32 kafka | [2024-03-20 16:13:37,129] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.637089483Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.208298ms 16:15:32 kafka | [2024-03-20 16:13:37,130] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 16:15:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.640716705Z level=info msg="Executing migration" id="add unique index role_org_id_name" 16:15:32 kafka | [2024-03-20 16:13:37,209] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.642481311Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.765175ms 16:15:32 kafka | [2024-03-20 16:13:37,219] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.646009921Z level=info msg="Executing migration" id="add index 
role_org_id_uid" 16:15:32 kafka | [2024-03-20 16:13:37,221] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 16:15:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.647782787Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.774296ms 16:15:32 kafka | [2024-03-20 16:13:37,222] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 policy-pap | security.protocol = PLAINTEXT 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.65212957Z level=info msg="Executing migration" id="create team role table" 16:15:32 kafka | [2024-03-20 16:13:37,223] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 16:15:32 policy-pap | security.providers = null 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.653191975Z level=info msg="Migration successfully executed" id="create team role table" duration=1.062515ms 16:15:32 kafka | [2024-03-20 16:13:37,236] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 policy-pap | send.buffer.bytes = 131072 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.656395391Z level=info msg="Executing migration" id="add index team_role.org_id" 16:15:32 kafka | [2024-03-20 16:13:37,237] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.657591628Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.198197ms 16:15:32 kafka | [2024-03-20 16:13:37,237] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 16:15:32 policy-pap | socket.connection.setup.timeout.ms = 10000 16:15:32 kafka | [2024-03-20 16:13:37,237] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.661588446Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 16:15:32 policy-pap | ssl.cipher.suites = null 16:15:32 kafka | [2024-03-20 16:13:37,237] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
16:15:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.662762123Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.173297ms
16:15:32 kafka | [2024-03-20 16:13:37,251] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | ssl.endpoint.identification.algorithm = https
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.667059665Z level=info msg="Executing migration" id="add index team_role.team_id"
16:15:32 kafka | [2024-03-20 16:13:37,252] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | ssl.engine.factory.class = null
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.667879276Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=832.682µs
16:15:32 kafka | [2024-03-20 16:13:37,252] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
16:15:32 policy-pap | ssl.key.password = null
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.670848419Z level=info msg="Executing migration" id="create user role table"
16:15:32 kafka | [2024-03-20 16:13:37,252] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | ssl.keymanager.algorithm = SunX509
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.67156061Z level=info msg="Migration successfully executed" id="create user role table" duration=711.90µs
16:15:32 kafka | [2024-03-20 16:13:37,252] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 policy-pap | ssl.keystore.certificate.chain = null
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.676064354Z level=info msg="Executing migration" id="add index user_role.org_id"
16:15:32 kafka | [2024-03-20 16:13:37,263] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | ssl.keystore.key = null
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.676894506Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=830.492µs
16:15:32 kafka | [2024-03-20 16:13:37,264] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | ssl.keystore.location = null
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.679985531Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
16:15:32 kafka | [2024-03-20 16:13:37,264] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
16:15:32 policy-pap | ssl.keystore.password = null
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.680798103Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=812.442µs
16:15:32 kafka | [2024-03-20 16:13:37,264] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | ssl.keystore.type = JKS
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.718933212Z level=info msg="Executing migration" id="add index user_role.user_id"
16:15:32 kafka | [2024-03-20 16:13:37,265] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 policy-pap | ssl.protocol = TLSv1.3
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.719918236Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=987.224µs
16:15:32 kafka | [2024-03-20 16:13:37,273] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | ssl.provider = null
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.724740936Z level=info msg="Executing migration" id="create builtin role table"
16:15:32 kafka | [2024-03-20 16:13:37,273] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | ssl.secure.random.implementation = null
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.726198437Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.457201ms
16:15:32 kafka | [2024-03-20 16:13:37,273] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
16:15:32 policy-pap | ssl.trustmanager.algorithm = PKIX
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.729666677Z level=info msg="Executing migration" id="add index builtin_role.role_id"
16:15:32 kafka | [2024-03-20 16:13:37,273] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | ssl.truststore.certificates = null
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.730879154Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.214877ms
16:15:32 policy-pap | ssl.truststore.location = null
16:15:32 kafka | [2024-03-20 16:13:37,273] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.73406159Z level=info msg="Executing migration" id="add index builtin_role.name"
16:15:32 policy-pap | ssl.truststore.password = null
16:15:32 kafka | [2024-03-20 16:13:37,281] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.735273588Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.211958ms
16:15:32 policy-pap | ssl.truststore.type = JKS
16:15:32 kafka | [2024-03-20 16:13:37,281] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.74030866Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
16:15:32 policy-pap | transaction.timeout.ms = 60000
16:15:32 kafka | [2024-03-20 16:13:37,281] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.748766382Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.435891ms
16:15:32 policy-pap | transactional.id = null
16:15:32 kafka | [2024-03-20 16:13:37,281] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.752290273Z level=info msg="Executing migration" id="add index builtin_role.org_id"
16:15:32 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
16:15:32 kafka | [2024-03-20 16:13:37,281] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.753588311Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.299879ms
16:15:32 policy-pap |
16:15:32 kafka | [2024-03-20 16:13:37,287] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.756653665Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
16:15:32 policy-pap | [2024-03-20T16:13:36.275+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
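The ssl.* and transaction.* lines above are the configuration dump the Kafka client prints when a producer is created: the keystore and truststore locations are null (no TLS material is actually loaded here), value.serializer is StringSerializer, transactional.id is unset, and "Instantiated an idempotent producer" reflects idempotence being enabled. A minimal Java sketch of a producer matching that dump; the kafka:9092 bootstrap address is taken from later log lines, and the topic and payload are illustrative only, not PAP's actual code:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // bootstrap address as it appears elsewhere in this log; adjust per environment
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // matches the dumped settings: idempotence on, 60 s transaction timeout, no transactional.id
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 60000);
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // illustrative message; the real PAP payloads appear later in this log
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            }
        }
    }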
16:15:32 kafka | [2024-03-20 16:13:37,288] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.757884093Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.230168ms
16:15:32 policy-pap | [2024-03-20T16:13:36.281+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
16:15:32 kafka | [2024-03-20 16:13:37,288] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.762595411Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
16:15:32 policy-pap | [2024-03-20T16:13:36.281+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
16:15:32 kafka | [2024-03-20 16:13:37,288] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.763806769Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.213408ms
16:15:32 policy-pap | [2024-03-20T16:13:36.281+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710951216280
16:15:32 kafka | [2024-03-20 16:13:37,288] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.76668466Z level=info msg="Executing migration" id="add unique index role.uid"
16:15:32 policy-pap | [2024-03-20T16:13:36.281+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d81c21d2-9fdd-4775-a391-be53f7d3c17e, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
16:15:32 kafka | [2024-03-20 16:13:37,299] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.767914838Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.230258ms
16:15:32 policy-pap | [2024-03-20T16:13:36.281+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
16:15:32 kafka | [2024-03-20 16:13:37,299] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | [2024-03-20T16:13:36.281+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.771226286Z level=info msg="Executing migration" id="create seed assignment table"
16:15:32 kafka | [2024-03-20 16:13:37,301] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:36.288+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.772170309Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=943.603µs
16:15:32 kafka | [2024-03-20 16:13:37,301] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:36.288+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.776930897Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
16:15:32 kafka | [2024-03-20 16:13:37,301] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 policy-pap | [2024-03-20T16:13:36.290+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.778181176Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.250028ms
16:15:32 kafka | [2024-03-20 16:13:37,309] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | [2024-03-20T16:13:36.291+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.781390022Z level=info msg="Executing migration" id="add column hidden to role table"
16:15:32 kafka | [2024-03-20 16:13:37,310] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | [2024-03-20T16:13:36.305+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.789878734Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.487022ms
16:15:32 kafka | [2024-03-20 16:13:37,310] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:36.307+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.793238753Z level=info msg="Executing migration" id="permission kind migration"
16:15:32 kafka | [2024-03-20 16:13:37,310] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:36.309+00:00|INFO|ServiceManager|main] Policy PAP started
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.800877603Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.63904ms
16:15:32 kafka | [2024-03-20 16:13:37,310] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
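Throughout this section the grafana migrator alternates "Executing migration" and "Migration successfully executed" pairs, each keyed by a migration id and timed. The pattern is run-once-and-record: a migration runs only if its id is not already recorded, and the later "Skipping migration: Already executed" warnings are the other branch of the same check. A rough Java/JDBC sketch of that bookkeeping, assuming a hypothetical migration_log table; Grafana's real migrator is Go code, so this only illustrates the mechanism:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MigratorSketch {
        // Execute `sql` at most once per migration id, recording completions in a
        // hypothetical migration_log table so reruns skip it.
        static void migrate(Connection db, String id, String sql) throws Exception {
            try (PreparedStatement check = db.prepareStatement(
                    "SELECT 1 FROM migration_log WHERE migration_id = ?")) {
                check.setString(1, id);
                try (ResultSet rs = check.executeQuery()) {
                    if (rs.next()) return; // "Skipping migration: Already executed"
                }
            }
            long start = System.nanoTime();
            try (Statement st = db.createStatement()) {
                st.execute(sql); // "Executing migration"
            }
            try (PreparedStatement record = db.prepareStatement(
                    "INSERT INTO migration_log (migration_id) VALUES (?)")) {
                record.setString(1, id);
                record.executeUpdate();
            }
            // mirrors the "Migration successfully executed ... duration=" lines
            System.out.printf("id=%s duration=%.3fms%n", id, (System.nanoTime() - start) / 1e6);
        }
    }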
16:15:32 policy-pap | [2024-03-20T16:13:36.309+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.806108768Z level=info msg="Executing migration" id="permission attribute migration"
16:15:32 kafka | [2024-03-20 16:13:37,318] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | [2024-03-20T16:13:36.312+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.582 seconds (process running for 11.248)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.811675008Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.56566ms
16:15:32 kafka | [2024-03-20 16:13:37,319] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | [2024-03-20T16:13:36.312+00:00|INFO|TimerManager|Thread-9] timer manager update started
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.814641081Z level=info msg="Executing migration" id="permission identifier migration"
16:15:32 kafka | [2024-03-20 16:13:37,319] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:36.805+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.823013482Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.371961ms
16:15:32 kafka | [2024-03-20 16:13:37,319] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:36.810+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: rOCUpPXHS92ZRRxFHWBetw
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.826209358Z level=info msg="Executing migration" id="add permission identifier index"
16:15:32 kafka | [2024-03-20 16:13:37,319] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 policy-pap | [2024-03-20T16:13:36.810+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: rOCUpPXHS92ZRRxFHWBetw
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.827532137Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.322709ms
16:15:32 kafka | [2024-03-20 16:13:37,326] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | [2024-03-20T16:13:36.810+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: rOCUpPXHS92ZRRxFHWBetw
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.830423899Z level=info msg="Executing migration" id="add permission action scope role_id index"
16:15:32 kafka | [2024-03-20 16:13:37,326] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | [2024-03-20T16:13:36.840+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.831774308Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.350199ms
16:15:32 kafka | [2024-03-20 16:13:37,326] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:36.847+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Cluster ID: rOCUpPXHS92ZRRxFHWBetw
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.836703999Z level=info msg="Executing migration" id="remove permission role_id action scope index"
16:15:32 kafka | [2024-03-20 16:13:37,326] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:36.907+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.837884466Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.180357ms
16:15:32 kafka | [2024-03-20 16:13:37,326] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
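The broker-side records here show each __consumer_offsets partition being created with cleanup.policy=compact: the internal offsets topic keeps only the newest committed offset per (group, topic, partition) key rather than a time-bounded history, which is why fifty small compacted partitions are materialized at startup. A sketch using the Java Admin API to read those settings back; the bootstrap address is assumed from the log:

    import java.util.Map;
    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class OffsetsTopicConfigSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // assumed address
            try (Admin admin = Admin.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
                Map<ConfigResource, Config> configs = admin.describeConfigs(Set.of(topic)).all().get();
                // expect cleanup.policy=compact and segment.bytes=104857600, as in the broker log
                configs.get(topic).entries().forEach(e ->
                        System.out.println(e.name() + " = " + e.value()));
            }
        }
    }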
16:15:32 policy-pap | [2024-03-20T16:13:36.931+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.845421104Z level=info msg="Executing migration" id="create query_history table v1"
16:15:32 kafka | [2024-03-20 16:13:37,333] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | [2024-03-20T16:13:36.932+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.846841235Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.422971ms
16:15:32 kafka | [2024-03-20 16:13:37,333] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | [2024-03-20T16:13:36.972+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.851802736Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
16:15:32 kafka | [2024-03-20 16:13:37,333] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:37.033+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.853666563Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.863847ms
16:15:32 policy-pap | [2024-03-20T16:13:37.081+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 kafka | [2024-03-20 16:13:37,333] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.857280435Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
16:15:32 policy-pap | [2024-03-20T16:13:37.140+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.857440498Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=159.903µs
16:15:32 kafka | [2024-03-20 16:13:37,333] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 policy-pap | [2024-03-20T16:13:37.190+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.859850413Z level=info msg="Executing migration" id="rbac disabled migrator"
16:15:32 kafka | [2024-03-20 16:13:37,342] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | [2024-03-20T16:13:37.253+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.859977314Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=127.71µs
16:15:32 kafka | [2024-03-20 16:13:37,343] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | [2024-03-20T16:13:37.296+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.863309982Z level=info msg="Executing migration" id="teams permissions migration"
16:15:32 kafka | [2024-03-20 16:13:37,343] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:37.368+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.864242376Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=904.813µs
16:15:32 kafka | [2024-03-20 16:13:37,343] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:37.402+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.869524222Z level=info msg="Executing migration" id="dashboard permissions"
16:15:32 kafka | [2024-03-20 16:13:37,343] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 policy-pap | [2024-03-20T16:13:37.475+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.870559787Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.037155ms
16:15:32 kafka | [2024-03-20 16:13:37,349] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | [2024-03-20T16:13:37.506+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.874513164Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
16:15:32 kafka | [2024-03-20 16:13:37,349] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | [2024-03-20T16:13:37.581+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.875290495Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=777.361µs
16:15:32 kafka | [2024-03-20 16:13:37,349] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:37.618+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.87843088Z level=info msg="Executing migration" id="drop managed folder create actions"
16:15:32 kafka | [2024-03-20 16:13:37,349] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:37.686+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.878751805Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=318.535µs
16:15:32 kafka | [2024-03-20 16:13:37,350] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 policy-pap | [2024-03-20T16:13:37.730+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.882048622Z level=info msg="Executing migration" id="alerting notification permissions"
16:15:32 kafka | [2024-03-20 16:13:37,368] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | [2024-03-20T16:13:37.793+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.882629181Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=580.689µs
16:15:32 kafka | [2024-03-20 16:13:37,369] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | [2024-03-20T16:13:37.842+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.888437294Z level=info msg="Executing migration" id="create query_history_star table v1"
16:15:32 kafka | [2024-03-20 16:13:37,369] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:37.850+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.889750883Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.312319ms
16:15:32 kafka | [2024-03-20 16:13:37,369] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:37.858+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] (Re-)joining group
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.893571528Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
16:15:32 kafka | [2024-03-20 16:13:37,369] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
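The alternating UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE warnings (correlation ids 2 through 20) are expected transient noise: the consumers request metadata for policy-pdp-pap before the topic has been auto-created and a leader elected, retry, and go quiet once "Discovered group coordinator" appears. A sketch that waits for that state explicitly with the Java Admin API instead of relying on client retries; the address is assumed from the log:

    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class WaitForTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // assumed
            try (Admin admin = Admin.create(props)) {
                while (true) {
                    try {
                        TopicDescription d = admin.describeTopics(Set.of("policy-pdp-pap"))
                                .allTopicNames().get().get("policy-pdp-pap");
                        // no usable leader node until election completes (the LEADER_NOT_AVAILABLE phase)
                        if (d.partitions().stream()
                                .allMatch(p -> p.leader() != null && !p.leader().isEmpty())) break;
                    } catch (Exception e) {
                        // UNKNOWN_TOPIC_OR_PARTITION surfaces here until auto-creation kicks in
                    }
                    Thread.sleep(500);
                }
                System.out.println("policy-pdp-pap has a leader for every partition");
            }
        }
    }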
16:15:32 policy-pap | [2024-03-20T16:13:37.886+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Request joining group due to: need to re-join with the given member-id: consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3-6979a0d6-81ba-4972-a608-c19ab7cdc18b
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.895450225Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.878487ms
16:15:32 kafka | [2024-03-20 16:13:37,377] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | [2024-03-20T16:13:37.887+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.898625661Z level=info msg="Executing migration" id="add column org_id in query_history_star"
16:15:32 kafka | [2024-03-20 16:13:37,377] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | [2024-03-20T16:13:37.887+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] (Re-)joining group
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.907125584Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.498602ms
16:15:32 kafka | [2024-03-20 16:13:37,377] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:37.900+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.912956518Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
16:15:32 kafka | [2024-03-20 16:13:37,377] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:37.903+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.91312552Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=168.932µs
16:15:32 kafka | [2024-03-20 16:13:37,377] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 policy-pap | [2024-03-20T16:13:37.906+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-6c72e110-e158-4eb3-95a3-9ff5ed05d348
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.916690341Z level=info msg="Executing migration" id="create correlation table v1"
16:15:32 kafka | [2024-03-20 16:13:37,385] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | [2024-03-20T16:13:37.906+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.91795064Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.257509ms
16:15:32 kafka | [2024-03-20 16:13:37,385] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | [2024-03-20T16:13:37.907+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.924424163Z level=info msg="Executing migration" id="add index correlations.uid"
16:15:32 kafka | [2024-03-20 16:13:37,385] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:40.916+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Successfully joined group with generation Generation{generationId=1, memberId='consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3-6979a0d6-81ba-4972-a608-c19ab7cdc18b', protocol='range'}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.925718781Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.294028ms
16:15:32 kafka | [2024-03-20 16:13:37,385] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:40.921+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-6c72e110-e158-4eb3-95a3-9ff5ed05d348', protocol='range'}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.932364857Z level=info msg="Executing migration" id="add index correlations.source_uid"
16:15:32 kafka | [2024-03-20 16:13:37,385] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 policy-pap | [2024-03-20T16:13:40.922+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Finished assignment for group at generation 1: {consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3-6979a0d6-81ba-4972-a608-c19ab7cdc18b=Assignment(partitions=[policy-pdp-pap-0])}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.933560674Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.196417ms
16:15:32 kafka | [2024-03-20 16:13:37,393] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 policy-pap | [2024-03-20T16:13:40.922+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-6c72e110-e158-4eb3-95a3-9ff5ed05d348=Assignment(partitions=[policy-pdp-pap-0])}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.936920583Z level=info msg="Executing migration" id="add correlation config column"
16:15:32 kafka | [2024-03-20 16:13:37,394] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 policy-pap | [2024-03-20T16:13:40.955+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-6c72e110-e158-4eb3-95a3-9ff5ed05d348', protocol='range'}
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.947658718Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.728055ms
16:15:32 kafka | [2024-03-20 16:13:37,394] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
16:15:32 policy-pap | [2024-03-20T16:13:40.956+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
16:15:32 kafka | [2024-03-20 16:13:37,394] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.952030751Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
16:15:32 policy-pap | [2024-03-20T16:13:40.963+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
16:15:32 kafka | [2024-03-20 16:13:37,394] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.953889127Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.858946ms
16:15:32 policy-pap | [2024-03-20T16:13:40.964+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Successfully synced group in generation Generation{generationId=1, memberId='consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3-6979a0d6-81ba-4972-a608-c19ab7cdc18b', protocol='range'}
16:15:32 kafka | [2024-03-20 16:13:37,405] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.958993151Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
16:15:32 policy-pap | [2024-03-20T16:13:40.965+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
16:15:32 kafka | [2024-03-20 16:13:37,406] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.960162668Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.169787ms
16:15:32 policy-pap | [2024-03-20T16:13:40.965+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Adding newly assigned partitions: policy-pdp-pap-0
16:15:32 kafka | [2024-03-20 16:13:37,406] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.963308143Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
16:15:32 policy-pap | [2024-03-20T16:13:41.006+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Found no committed offset for partition policy-pdp-pap-0
16:15:32 kafka | [2024-03-20 16:13:37,406] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.985237489Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=21.929336ms
16:15:32 policy-pap | [2024-03-20T16:13:41.007+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
16:15:32 kafka | [2024-03-20 16:13:37,406] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.987847196Z level=info msg="Executing migration" id="create correlation v2"
16:15:32 policy-pap | [2024-03-20T16:13:41.024+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3, groupId=cd3571d2-bf35-4e38-b6c0-741ea8425298] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
16:15:32 kafka | [2024-03-20 16:13:37,415] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.988739479Z level=info msg="Migration successfully executed" id="create correlation v2" duration=891.893µs
16:15:32 policy-pap | [2024-03-20T16:13:41.024+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
16:15:32 kafka | [2024-03-20 16:13:37,415] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.994098707Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
16:15:32 policy-pap | [2024-03-20T16:13:41.594+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
16:15:32 kafka | [2024-03-20 16:13:37,415] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.995260243Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.162206ms
16:15:32 policy-pap | [2024-03-20T16:13:41.594+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
16:15:32 kafka | [2024-03-20 16:13:37,415] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.998433409Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
16:15:32 policy-pap | [2024-03-20T16:13:41.600+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 5 ms
16:15:32 kafka | [2024-03-20 16:13:37,415] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
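The sequence from "(Re-)joining group" through MemberIdRequiredException, "Successfully joined group with generation Generation{generationId=1, ...}", "Finished assignment", "Successfully synced group", "Adding newly assigned partitions" and finally "Found no committed offset ... Resetting offset" is one complete consumer-group rebalance: the first join is rejected so the coordinator can hand out a member id, the group leader assigns policy-pdp-pap-0 with the range assignor, and because the group is brand new there is no committed offset, so the position falls back per auto.offset.reset. A minimal Java consumer that produces exactly this life cycle; the group id comes from the log and the address is assumed:

    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // assumed
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");          // group id from the log
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // with no committed offset, reset to the end of the log ("Resetting offset ... to position ...")
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                    public void onPartitionsRevoked(Collection<TopicPartition> parts) { }
                    public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        // fires after "Successfully synced group" / "Adding newly assigned partitions"
                        System.out.println("assigned: " + parts);
                    }
                });
                while (true) {
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                        System.out.println(rec.value());
                    }
                }
            }
        }
    }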
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:04.999694537Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.261028ms
16:15:32 policy-pap | [2024-03-20T16:13:57.520+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
16:15:32 kafka | [2024-03-20 16:13:37,424] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.007949928Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
16:15:32 policy-pap | []
16:15:32 kafka | [2024-03-20 16:13:37,424] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.009244107Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.294189ms
16:15:32 policy-pap | [2024-03-20T16:13:57.521+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
16:15:32 kafka | [2024-03-20 16:13:37,424] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.016349381Z level=info msg="Executing migration" id="copy correlation v1 to v2"
16:15:32 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d6ad7682-c93c-4197-b802-decc7b740c1d","timestampMs":1710951237479,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup"}
16:15:32 kafka | [2024-03-20 16:13:37,424] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.016707256Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=358.245µs
16:15:32 policy-pap | [2024-03-20T16:13:57.521+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
16:15:32 kafka | [2024-03-20 16:13:37,425] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.020187337Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
16:15:32 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d6ad7682-c93c-4197-b802-decc7b740c1d","timestampMs":1710951237479,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup"}
16:15:32 kafka | [2024-03-20 16:13:37,433] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.021503527Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.315459ms
16:15:32 policy-pap | [2024-03-20T16:13:57.530+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
16:15:32 kafka | [2024-03-20 16:13:37,434] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.029680276Z level=info msg="Executing migration" id="add provisioning column"
16:15:32 policy-pap | [2024-03-20T16:13:57.620+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate starting
16:15:32 kafka | [2024-03-20 16:13:37,434] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.037970998Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.315342ms
16:15:32 policy-pap | [2024-03-20T16:13:57.620+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate starting listener
16:15:32 kafka | [2024-03-20 16:13:37,434] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.041535251Z level=info msg="Executing migration" id="create entity_events table"
16:15:32 policy-pap | [2024-03-20T16:13:57.621+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate starting timer
16:15:32 kafka | [2024-03-20 16:13:37,434] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.042443714Z level=info msg="Migration successfully executed" id="create entity_events table" duration=908.453µs
16:15:32 policy-pap | [2024-03-20T16:13:57.621+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=a504a2da-8d70-43be-80c3-3972a17aabb4, expireMs=1710951267621]
16:15:32 kafka | [2024-03-20 16:13:37,444] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.046699627Z level=info msg="Executing migration" id="create dashboard public config v1"
16:15:32 policy-pap | [2024-03-20T16:13:57.623+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate starting enqueue
16:15:32 kafka | [2024-03-20 16:13:37,445] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.047847434Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.147307ms
16:15:32 policy-pap | [2024-03-20T16:13:57.623+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=a504a2da-8d70-43be-80c3-3972a17aabb4, expireMs=1710951267621]
16:15:32 kafka | [2024-03-20 16:13:37,446] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.051306004Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
16:15:32 policy-pap | [2024-03-20T16:13:57.624+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate started
16:15:32 kafka | [2024-03-20 16:13:37,446] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.051926074Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
16:15:32 policy-pap | [2024-03-20T16:13:57.626+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
16:15:32 kafka | [2024-03-20 16:13:37,446] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
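The two timer records above are self-consistent: "update timer waiting 29998ms" against expireMs=1710951267621 places the log call at 1710951267621 − 29998 = 1710951237623 ms, about 19 ms after the PDP_UPDATE's timestampMs of 1710951237604 seen below, i.e. the standard 30-second PdpUpdate response timeout minus the few milliseconds spent between registering the timer and enqueueing the message.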
(state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.055320634Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 16:15:32 policy-pap | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a504a2da-8d70-43be-80c3-3972a17aabb4","timestampMs":1710951237604,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 kafka | [2024-03-20 16:13:37,456] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.055912893Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 16:15:32 policy-pap | [2024-03-20T16:13:57.674+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 16:15:32 kafka | [2024-03-20 16:13:37,456] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.059535096Z level=info msg="Executing migration" id="Drop old dashboard public config table" 16:15:32 policy-pap | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a504a2da-8d70-43be-80c3-3972a17aabb4","timestampMs":1710951237604,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 kafka | [2024-03-20 16:13:37,456] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.060390608Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=852.562µs 16:15:32 policy-pap | [2024-03-20T16:13:57.674+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 16:15:32 kafka | [2024-03-20 16:13:37,456] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.066263185Z level=info msg="Executing migration" id="recreate dashboard public config v1" 16:15:32 policy-pap | [2024-03-20T16:13:57.676+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 kafka | [2024-03-20 16:13:37,456] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.067387751Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.123526ms 16:15:32 policy-pap | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a504a2da-8d70-43be-80c3-3972a17aabb4","timestampMs":1710951237604,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 kafka | [2024-03-20 16:13:37,473] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.070754981Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 16:15:32 policy-pap | [2024-03-20T16:13:57.677+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 16:15:32 kafka | [2024-03-20 16:13:37,474] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 kafka | [2024-03-20 16:13:37,474] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.07207522Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.262399ms 16:15:32 policy-pap | [2024-03-20T16:13:57.687+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 16:15:32 kafka | [2024-03-20 16:13:37,474] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.07613909Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 16:15:32 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"adb11c3d-246a-4a9c-8f75-f586a12c0bc8","timestampMs":1710951237671,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup"} 16:15:32 kafka | [2024-03-20 16:13:37,474] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.077374998Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.235558ms 16:15:32 policy-pap | [2024-03-20T16:13:57.687+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 kafka | [2024-03-20 16:13:37,483] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.080646586Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 16:15:32 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"adb11c3d-246a-4a9c-8f75-f586a12c0bc8","timestampMs":1710951237671,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup"} 16:15:32 kafka | [2024-03-20 16:13:37,483] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.081828514Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.181838ms 16:15:32 policy-pap | [2024-03-20T16:13:57.688+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 16:15:32 kafka | [2024-03-20 16:13:37,483] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.085931664Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 16:15:32 policy-pap | [2024-03-20T16:13:57.688+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.087076511Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.144867ms 16:15:32 kafka | [2024-03-20 16:13:37,483] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a504a2da-8d70-43be-80c3-3972a17aabb4","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"fd738cf8-6de6-469d-94de-5a85fd583f32","timestampMs":1710951237671,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.091267503Z level=info msg="Executing migration" id="Drop public config table" 16:15:32 kafka | [2024-03-20 16:13:37,483] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 policy-pap | [2024-03-20T16:13:57.713+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.092163406Z level=info msg="Migration successfully executed" id="Drop public config table" duration=895.723µs 16:15:32 kafka | [2024-03-20 16:13:37,493] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a504a2da-8d70-43be-80c3-3972a17aabb4","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"fd738cf8-6de6-469d-94de-5a85fd583f32","timestampMs":1710951237671,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.095395843Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 16:15:32 kafka | [2024-03-20 16:13:37,493] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 policy-pap | [2024-03-20T16:13:57.713+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate stopping 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.096673342Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.277419ms 16:15:32 kafka | [2024-03-20 16:13:37,493] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.713+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate stopping enqueue 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.103237199Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 16:15:32 kafka | [2024-03-20 16:13:37,493] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.713+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate stopping timer 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.104964704Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.726015ms 16:15:32 kafka | [2024-03-20 16:13:37,494] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 policy-pap | [2024-03-20T16:13:57.714+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=a504a2da-8d70-43be-80c3-3972a17aabb4, expireMs=1710951267621] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.108859061Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 16:15:32 kafka | [2024-03-20 16:13:37,500] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 policy-pap | [2024-03-20T16:13:57.714+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate stopping listener 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.112506785Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=3.646184ms 16:15:32 kafka | [2024-03-20 16:13:37,501] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 policy-pap | [2024-03-20T16:13:57.714+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate stopped 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.116202019Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 16:15:32 kafka | [2024-03-20 16:13:37,501] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.714+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a504a2da-8d70-43be-80c3-3972a17aabb4 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.117404267Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.202398ms 16:15:32 kafka | [2024-03-20 16:13:37,501] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.721+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate successful 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.121740201Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 16:15:32 policy-pap | [2024-03-20T16:13:57.721+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 start publishing next request 16:15:32 kafka | [2024-03-20 16:13:37,501] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.145017883Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.278212ms 16:15:32 policy-pap | [2024-03-20T16:13:57.721+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpStateChange starting 16:15:32 kafka | [2024-03-20 16:13:37,511] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.182800749Z level=info msg="Executing migration" id="add annotations_enabled column" 16:15:32 policy-pap | [2024-03-20T16:13:57.721+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpStateChange starting listener 16:15:32 kafka | [2024-03-20 16:13:37,512] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.194345279Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=11.54953ms 16:15:32 policy-pap | [2024-03-20T16:13:57.721+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpStateChange starting timer 16:15:32 kafka | [2024-03-20 16:13:37,512] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.198271907Z level=info msg="Executing migration" id="add time_selection_enabled column" 16:15:32 policy-pap | [2024-03-20T16:13:57.721+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=f0c0319a-f2cf-4921-aa46-d1012f3c7fcc, expireMs=1710951267721] 16:15:32 kafka | [2024-03-20 16:13:37,512] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.210286234Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=12.013097ms 16:15:32 policy-pap | [2024-03-20T16:13:57.721+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpStateChange starting enqueue 16:15:32 kafka | [2024-03-20 16:13:37,512] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.213991508Z level=info msg="Executing migration" id="delete orphaned public dashboards" 16:15:32 policy-pap | [2024-03-20T16:13:57.721+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=f0c0319a-f2cf-4921-aa46-d1012f3c7fcc, expireMs=1710951267721] 16:15:32 kafka | [2024-03-20 16:13:37,520] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.214181621Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=190.553µs 16:15:32 policy-pap | [2024-03-20T16:13:57.722+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 16:15:32 kafka | [2024-03-20 16:13:37,520] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.218499494Z level=info msg="Executing migration" id="add share column" 16:15:32 policy-pap | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f0c0319a-f2cf-4921-aa46-d1012f3c7fcc","timestampMs":1710951237605,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 kafka | [2024-03-20 16:13:37,520] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.224856748Z level=info msg="Migration successfully executed" id="add share column" duration=6.355744ms 16:15:32 policy-pap | [2024-03-20T16:13:57.723+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpStateChange started 16:15:32 kafka | [2024-03-20 16:13:37,520] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.245174477Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 16:15:32 policy-pap | [2024-03-20T16:13:57.733+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 16:15:32 kafka | [2024-03-20 16:13:37,520] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.245473701Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=303.984µs 16:15:32 policy-pap | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f0c0319a-f2cf-4921-aa46-d1012f3c7fcc","timestampMs":1710951237605,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 kafka | [2024-03-20 16:13:37,527] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.252497045Z level=info msg="Executing migration" id="create file table" 16:15:32 policy-pap | [2024-03-20T16:13:57.733+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 16:15:32 kafka | [2024-03-20 16:13:37,527] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.253611131Z level=info msg="Migration successfully executed" id="create file table" duration=1.114297ms 16:15:32 policy-pap | [2024-03-20T16:13:57.748+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 16:15:32 kafka | [2024-03-20 16:13:37,527] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.257357116Z level=info msg="Executing migration" id="file table idx: path natural pk" 16:15:32 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f0c0319a-f2cf-4921-aa46-d1012f3c7fcc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"7784a956-4250-4072-b6c4-e9015ed0fd04","timestampMs":1710951237737,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 kafka | [2024-03-20 16:13:37,528] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.259037371Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.680445ms 16:15:32 policy-pap | [2024-03-20T16:13:57.749+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f0c0319a-f2cf-4921-aa46-d1012f3c7fcc 16:15:32 kafka | [2024-03-20 16:13:37,528] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.262968418Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 16:15:32 policy-pap | [2024-03-20T16:13:57.765+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 kafka | [2024-03-20 16:13:37,534] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.264710454Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.741366ms 16:15:32 policy-pap | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"f0c0319a-f2cf-4921-aa46-d1012f3c7fcc","timestampMs":1710951237605,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 kafka | [2024-03-20 16:13:37,534] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.270401828Z level=info msg="Executing migration" id="create file_meta table" 16:15:32 policy-pap | [2024-03-20T16:13:57.765+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 16:15:32 kafka | [2024-03-20 16:13:37,534] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.27122435Z level=info msg="Migration successfully executed" id="create file_meta table" duration=821.832µs 16:15:32 policy-pap | [2024-03-20T16:13:57.768+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 kafka | [2024-03-20 16:13:37,534] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.282819991Z level=info msg="Executing migration" id="file table idx: path key" 16:15:32 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"f0c0319a-f2cf-4921-aa46-d1012f3c7fcc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"7784a956-4250-4072-b6c4-e9015ed0fd04","timestampMs":1710951237737,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 kafka | [2024-03-20 16:13:37,534] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.285053033Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=2.233473ms 16:15:32 policy-pap | [2024-03-20T16:13:57.771+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpStateChange stopping 16:15:32 kafka | [2024-03-20 16:13:37,542] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.291368506Z level=info msg="Executing migration" id="set path collation in file table" 16:15:32 policy-pap | [2024-03-20T16:13:57.772+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpStateChange stopping enqueue 16:15:32 kafka | [2024-03-20 16:13:37,543] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.291471288Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=103.992µs 16:15:32 policy-pap | [2024-03-20T16:13:57.772+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpStateChange stopping timer 16:15:32 kafka | [2024-03-20 16:13:37,543] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.303062499Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 16:15:32 policy-pap | [2024-03-20T16:13:57.772+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=f0c0319a-f2cf-4921-aa46-d1012f3c7fcc, expireMs=1710951267721] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.303195661Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=139.182µs 16:15:32 kafka | [2024-03-20 16:13:37,543] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.772+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpStateChange stopping listener 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.310598369Z level=info msg="Executing migration" id="managed permissions migration" 16:15:32 kafka | [2024-03-20 16:13:37,543] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 policy-pap | [2024-03-20T16:13:57.772+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpStateChange stopped 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.311290999Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=698.47µs 16:15:32 kafka | [2024-03-20 16:13:37,550] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 policy-pap | [2024-03-20T16:13:57.772+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpStateChange successful 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.32017387Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 16:15:32 kafka | [2024-03-20 16:13:37,551] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 policy-pap | [2024-03-20T16:13:57.772+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 start publishing next request 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.320457544Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=280.284µs 16:15:32 kafka | [2024-03-20 16:13:37,551] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.773+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate starting 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.325473408Z level=info msg="Executing migration" id="RBAC action name migrator" 16:15:32 kafka | [2024-03-20 16:13:37,551] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.773+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate starting listener 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.327533479Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.06042ms 16:15:32 kafka | [2024-03-20 16:13:37,551] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 policy-pap | [2024-03-20T16:13:57.773+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate starting timer 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.334899767Z level=info msg="Executing migration" id="Add UID column to playlist" 16:15:32 kafka | [2024-03-20 16:13:37,557] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.349266988Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=14.432342ms 16:15:32 kafka | [2024-03-20 16:13:37,558] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 16:15:32 policy-pap | [2024-03-20T16:13:57.773+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=0de8b291-dfee-45f0-b06f-ebca27f12b4d, expireMs=1710951267773] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.356134079Z level=info msg="Executing migration" id="Update uid column values in playlist" 16:15:32 kafka | [2024-03-20 16:13:37,558] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.773+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate starting enqueue 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.356291732Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=158.313µs 16:15:32 kafka | [2024-03-20 16:13:37,558] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.773+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate started 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.360882989Z level=info msg="Executing migration" id="Add index for uid in playlist" 16:15:32 kafka | [2024-03-20 16:13:37,558] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(NJ0Ig4IURHCsgbfyjcgUCg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 policy-pap | [2024-03-20T16:13:57.774+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.362574004Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.690535ms 16:15:32 kafka | [2024-03-20 16:13:37,563] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 policy-pap | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"0de8b291-dfee-45f0-b06f-ebca27f12b4d","timestampMs":1710951237757,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.370346768Z level=info msg="Executing migration" id="update group index for alert rules" 16:15:32 kafka | [2024-03-20 16:13:37,564] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 policy-pap | [2024-03-20T16:13:57.784+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.370811165Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=465.647µs 16:15:32 kafka | [2024-03-20 16:13:37,564] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 16:15:32 policy-pap | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"0de8b291-dfee-45f0-b06f-ebca27f12b4d","timestampMs":1710951237757,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.37658471Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 16:15:32 kafka | [2024-03-20 16:13:37,564] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.784+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.377009676Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=426.326µs 16:15:32 kafka | [2024-03-20 16:13:37,564] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 policy-pap | [2024-03-20T16:13:57.788+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.384252743Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 16:15:32 kafka | [2024-03-20 16:13:37,570] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 policy-pap | {"source":"pap-f154f532-9a94-4862-bc4e-0e5e02394a2f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"0de8b291-dfee-45f0-b06f-ebca27f12b4d","timestampMs":1710951237757,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.384939403Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=692.49µs 16:15:32 kafka | [2024-03-20 16:13:37,571] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 policy-pap | [2024-03-20T16:13:57.788+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.390210361Z level=info msg="Executing migration" id="add action column to seed_assignment" 16:15:32 kafka | [2024-03-20 16:13:37,571] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.797+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.399562048Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.351317ms 16:15:32 kafka | [2024-03-20 16:13:37,571] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"0de8b291-dfee-45f0-b06f-ebca27f12b4d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ff9cb3d2-ec93-4a4a-bb0d-3132bba09d00","timestampMs":1710951237786,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.403709519Z level=info msg="Executing migration" id="add scope column to seed_assignment" 16:15:32 kafka | [2024-03-20 16:13:37,571] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 policy-pap | [2024-03-20T16:13:57.798+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 0de8b291-dfee-45f0-b06f-ebca27f12b4d 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.413608425Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.896836ms 16:15:32 kafka | [2024-03-20 16:13:37,581] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 policy-pap | [2024-03-20T16:13:57.800+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.418557828Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 16:15:32 kafka | [2024-03-20 16:13:37,581] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"0de8b291-dfee-45f0-b06f-ebca27f12b4d","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ff9cb3d2-ec93-4a4a-bb0d-3132bba09d00","timestampMs":1710951237786,"name":"apex-6f1fb960-db30-4a41-af7c-02c6b3f48305","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.41937957Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=823.042µs 16:15:32 kafka | [2024-03-20 16:13:37,581] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.801+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate stopping 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.424575206Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 16:15:32 kafka | [2024-03-20 16:13:37,582] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 policy-pap | [2024-03-20T16:13:57.801+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate stopping enqueue 16:15:32 kafka | [2024-03-20 16:13:37,582] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.495298366Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=70.72026ms 16:15:32 policy-pap | [2024-03-20T16:13:57.801+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate stopping timer 16:15:32 kafka | [2024-03-20 16:13:37,587] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.499187054Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 16:15:32 policy-pap | [2024-03-20T16:13:57.801+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=0de8b291-dfee-45f0-b06f-ebca27f12b4d, expireMs=1710951267773] 16:15:32 kafka | [2024-03-20 16:13:37,588] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.500058657Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=871.303µs 16:15:32 policy-pap | [2024-03-20T16:13:57.802+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate stopping listener 16:15:32 kafka | [2024-03-20 16:13:37,588] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.50435525Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 16:15:32 policy-pap | [2024-03-20T16:13:57.802+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate stopped 16:15:32 kafka | [2024-03-20 16:13:37,588] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.505405735Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.060835ms 16:15:32 policy-pap | [2024-03-20T16:13:57.808+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 PdpUpdate successful 16:15:32 kafka | [2024-03-20 16:13:37,588] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.511433774Z level=info msg="Executing migration" id="add primary key to seed_assigment" 16:15:32 policy-pap | [2024-03-20T16:13:57.808+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-6f1fb960-db30-4a41-af7c-02c6b3f48305 has no more requests 16:15:32 kafka | [2024-03-20 16:13:37,605] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.534670616Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=23.232182ms 16:15:32 policy-pap | [2024-03-20T16:14:02.392+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 16:15:32 kafka | [2024-03-20 16:13:37,606] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.540581423Z level=info msg="Executing migration" id="add origin column to seed_assignment" 16:15:32 policy-pap | [2024-03-20T16:14:02.405+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 16:15:32 kafka | [2024-03-20 16:13:37,606] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.54721437Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.633827ms 16:15:32 policy-pap | [2024-03-20T16:14:02.787+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup 16:15:32 kafka | [2024-03-20 16:13:37,606] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.552227764Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 16:15:32 policy-pap | [2024-03-20T16:14:03.341+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup 16:15:32 kafka | [2024-03-20 16:13:37,606] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.552577509Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=350.315µs
16:15:32 policy-pap | [2024-03-20T16:14:03.342+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,667] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.555514393Z level=info msg="Executing migration" id="prevent seeding OnCall access"
16:15:32 policy-pap | [2024-03-20T16:14:03.844+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,668] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.555790917Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=276.384µs
16:15:32 policy-pap | [2024-03-20T16:14:04.074+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0
16:15:32 kafka | [2024-03-20 16:13:37,668] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.559220467Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
16:15:32 policy-pap | [2024-03-20T16:14:04.193+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
16:15:32 kafka | [2024-03-20 16:13:37,669] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.559527142Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=305.565µs
16:15:32 policy-pap | [2024-03-20T16:14:04.193+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,669] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.562294412Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
16:15:32 policy-pap | [2024-03-20T16:14:04.193+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,678] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.562552126Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=257.434µs
16:15:32 policy-pap | [2024-03-20T16:14:04.207+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-03-20T16:14:04Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-03-20T16:14:04Z, user=policyadmin)]
16:15:32 kafka | [2024-03-20 16:13:37,680] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.567367857Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
16:15:32 policy-pap | [2024-03-20T16:14:04.881+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,680] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.567631851Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=264.374µs
16:15:32 policy-pap | [2024-03-20T16:14:04.882+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
16:15:32 kafka | [2024-03-20 16:13:37,681] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.570593924Z level=info msg="Executing migration" id="create folder table"
16:15:32 policy-pap | [2024-03-20T16:14:04.882+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0
16:15:32 kafka | [2024-03-20 16:13:37,681] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.571593909Z level=info msg="Migration successfully executed" id="create folder table" duration=999.565µs
16:15:32 policy-pap | [2024-03-20T16:14:04.882+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,688] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.574546713Z level=info msg="Executing migration" id="Add index for parent_uid"
16:15:32 policy-pap | [2024-03-20T16:14:04.883+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,690] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.576029114Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.481961ms
16:15:32 policy-pap | [2024-03-20T16:14:04.930+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-20T16:14:04Z, user=policyadmin)]
16:15:32 kafka | [2024-03-20 16:13:37,690] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.583312112Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
16:15:32 policy-pap | [2024-03-20T16:14:05.256+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup
16:15:32 kafka | [2024-03-20 16:13:37,690] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.584621871Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.310319ms
16:15:32 policy-pap | [2024-03-20T16:14:05.256+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,690] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.595136956Z level=info msg="Executing migration" id="Update folder title length"
16:15:32 policy-pap | [2024-03-20T16:14:05.256+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
16:15:32 kafka | [2024-03-20 16:13:37,699] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.595317628Z level=info msg="Migration successfully executed" id="Update folder title length" duration=185.663µs
16:15:32 policy-pap | [2024-03-20T16:14:05.256+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
16:15:32 kafka | [2024-03-20 16:13:37,700] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.598446904Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
16:15:32 policy-pap | [2024-03-20T16:14:05.256+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,700] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.600166519Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.720365ms
16:15:32 policy-pap | [2024-03-20T16:14:05.256+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,700] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.602997201Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
16:15:32 policy-pap | [2024-03-20T16:14:05.304+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-20T16:14:05Z, user=policyadmin)]
16:15:32 kafka | [2024-03-20 16:13:37,700] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
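The PolicyAudit records that policy-pap prints above are flat key=value lists, which makes them easy to pull out of a saved console log. A minimal Python sketch, not part of the job itself; the function name and example string are illustrative only:

import re

# Extract the key=value fields from PolicyAudit(...) records in lines like
# the policy-pap "sending audit records to database" entries above.
AUDIT_RE = re.compile(r"PolicyAudit\(([^)]*)\)")

def parse_audits(line):
    # Each record is a flat "k=v, k=v, ..." list; the values in these logs
    # contain no commas, so a simple split is sufficient.
    return [dict(kv.split("=", 1) for kv in body.split(", "))
            for body in AUDIT_RE.findall(line)]

example = ("sending audit records to database: [PolicyAudit(auditId=null, "
           "pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, "
           "action=DEPLOYMENT, timestamp=2024-03-20T16:14:04Z, user=policyadmin)]")
print(parse_audits(example)[0]["action"])  # DEPLOYMENT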
16:15:32 kafka | [2024-03-20 16:13:37,708] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.604165188Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.168307ms
16:15:32 policy-pap | [2024-03-20T16:14:25.896+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,710] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.640804247Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
16:15:32 policy-pap | [2024-03-20T16:14:25.898+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
16:15:32 kafka | [2024-03-20 16:13:37,710] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.642829477Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.02489ms
16:15:32 policy-pap | [2024-03-20T16:14:27.621+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=a504a2da-8d70-43be-80c3-3972a17aabb4, expireMs=1710951267621]
16:15:32 kafka | [2024-03-20 16:13:37,711] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.650660622Z level=info msg="Executing migration" id="Sync dashboard and folder table"
16:15:32 policy-pap | [2024-03-20T16:14:27.722+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=f0c0319a-f2cf-4921-aa46-d1012f3c7fcc, expireMs=1710951267721]
16:15:32 kafka | [2024-03-20 16:13:37,711] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
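The expireMs values in the two TimerManager entries above are Unix timestamps in milliseconds. Converting one back shows the timer was discarded at the same instant it expired (the log line's own timestamp, 16:14:27.621). A quick check in Python, assuming nothing beyond the standard library:

from datetime import datetime, timezone

# expireMs from the "update timer discarded" entry above, in epoch milliseconds.
expire_ms = 1710951267621
print(datetime.fromtimestamp(expire_ms / 1000, tz=timezone.utc).isoformat())
# -> 2024-03-20T16:14:27.621000+00:00, matching the log line's timestamp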
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.651096519Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=436.167µs
16:15:32 kafka | [2024-03-20 16:13:37,722] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.653940951Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
16:15:32 kafka | [2024-03-20 16:13:37,729] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.654350957Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=409.926µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.658941054Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
16:15:32 kafka | [2024-03-20 16:13:37,729] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.660668659Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.726965ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.663873187Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
16:15:32 kafka | [2024-03-20 16:13:37,729] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.665044604Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.171017ms
16:15:32 kafka | [2024-03-20 16:13:37,729] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.669682272Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
16:15:32 kafka | [2024-03-20 16:13:37,740] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.670735618Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.053146ms
16:15:32 kafka | [2024-03-20 16:13:37,741] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.675916014Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.677088591Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.171897ms
16:15:32 kafka | [2024-03-20 16:13:37,741] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.680117165Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
16:15:32 kafka | [2024-03-20 16:13:37,741] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.681170321Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.051046ms
16:15:32 kafka | [2024-03-20 16:13:37,741] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.688705592Z level=info msg="Executing migration" id="create anon_device table"
16:15:32 kafka | [2024-03-20 16:13:37,753] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.690064282Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.35816ms
16:15:32 kafka | [2024-03-20 16:13:37,754] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.695076896Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
16:15:32 kafka | [2024-03-20 16:13:37,754] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.696353634Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.276678ms
16:15:32 kafka | [2024-03-20 16:13:37,754] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 kafka | [2024-03-20 16:13:37,755] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,763] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 kafka | [2024-03-20 16:13:37,764] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 kafka | [2024-03-20 16:13:37,764] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
16:15:32 kafka | [2024-03-20 16:13:37,764] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 kafka | [2024-03-20 16:13:37,764] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,771] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 kafka | [2024-03-20 16:13:37,771] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 kafka | [2024-03-20 16:13:37,771] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
16:15:32 kafka | [2024-03-20 16:13:37,772] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 kafka | [2024-03-20 16:13:37,772] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,779] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
16:15:32 kafka | [2024-03-20 16:13:37,780] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
16:15:32 kafka | [2024-03-20 16:13:37,780] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
16:15:32 kafka | [2024-03-20 16:13:37,780] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
16:15:32 kafka | [2024-03-20 16:13:37,781] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(QvJGL4ltS_qYQrjK6IZK9A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
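The kafka entries above and below walk every partition of the internal __consumer_offsets topic (50 partitions by default, numbered 0-49) through the same lifecycle: load and create the compacted log, complete the become-leader transition, then elect broker 1 as group coordinator for the partition. Which of those partitions owns a given consumer group is derived from the group id. A sketch of that mapping, assuming Kafka's documented abs(groupId.hashCode()) % numPartitions rule and ASCII group ids; the example group name is made up:

def java_hash(s):
    # Java String.hashCode semantics, reimplemented for ASCII strings.
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def coordinator_partition(group_id, num_partitions=50):
    # Kafka maps each consumer group to one __consumer_offsets partition;
    # whichever broker leads that partition serves as the group coordinator.
    return (java_hash(group_id) & 0x7FFFFFFF) % num_partitions

print(coordinator_partition("example-group"))  # some partition in [0, 50)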
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,788] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,789] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.700385194Z level=info msg="Executing migration" id="add index anon_device.updated_at"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.701540941Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.158157ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.706069457Z level=info msg="Executing migration" id="create signing_key table"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.707003521Z level=info msg="Migration successfully executed" id="create signing_key table" duration=934.054µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.710255179Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.711434977Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.179197ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.7143852Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.715548757Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.163297ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.720260186Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.720578421Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=319.645µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.727579624Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.740993531Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=13.413867ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.743741772Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.744270379Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=529.667µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.74842678Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.749633498Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.206288ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.753305152Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.754404089Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.099026ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.757462284Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.75859792Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.135666ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.762752881Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.764172362Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.418171ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.767724575Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.768977393Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.250288ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.777029261Z level=info msg="Executing migration" id="create sso_setting table"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.77894534Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.915769ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.785538777Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.786497231Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=959.104µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.789617207Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.789936741Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=319.894µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.794515189Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.79460894Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=94.751µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.798890463Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.812470453Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=13.58019ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.819201452Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.826663262Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.45869ms
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.830417757Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.830749212Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=331.445µs
16:15:32 grafana | logger=migrator t=2024-03-20T16:13:05.834871573Z level=info msg="migrations completed" performed=547 skipped=0 duration=4.75806066s
16:15:32 grafana | logger=sqlstore t=2024-03-20T16:13:05.844843229Z level=info msg="Created default admin" user=admin
16:15:32 grafana | logger=sqlstore t=2024-03-20T16:13:05.845151544Z level=info msg="Created default organization"
16:15:32 grafana | logger=secrets t=2024-03-20T16:13:05.854066355Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
16:15:32 grafana | logger=plugin.store t=2024-03-20T16:13:05.892664353Z level=info msg="Loading plugins..."
16:15:32 grafana | logger=local.finder t=2024-03-20T16:13:05.931031397Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
16:15:32 grafana | logger=plugin.store t=2024-03-20T16:13:05.931062568Z level=info msg="Plugins loaded" count=55 duration=38.399406ms
16:15:32 grafana | logger=query_data t=2024-03-20T16:13:05.933482193Z level=info msg="Query Service initialization"
16:15:32 grafana | logger=live.push_http t=2024-03-20T16:13:05.936565779Z level=info msg="Live Push Gateway initialization"
16:15:32 grafana | logger=ngalert.migration t=2024-03-20T16:13:05.943744414Z level=info msg=Starting
16:15:32 grafana | logger=ngalert.migration t=2024-03-20T16:13:05.94411301Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
16:15:32 grafana | logger=ngalert.migration orgID=1 t=2024-03-20T16:13:05.944437654Z level=info msg="Migrating alerts for organisation"
16:15:32 grafana | logger=ngalert.migration orgID=1 t=2024-03-20T16:13:05.944926162Z level=info msg="Alerts found to migrate" alerts=0
16:15:32 grafana | logger=ngalert.migration t=2024-03-20T16:13:05.946220321Z level=info msg="Completed alerting migration"
16:15:32 grafana | logger=ngalert.state.manager t=2024-03-20T16:13:05.975319649Z level=info msg="Running in alternative execution of Error/NoData mode"
16:15:32 grafana | logger=infra.usagestats.collector t=2024-03-20T16:13:05.977317018Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
16:15:32 grafana | logger=provisioning.datasources t=2024-03-20T16:13:05.979788344Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
16:15:32 grafana | logger=provisioning.alerting t=2024-03-20T16:13:05.996436289Z level=info msg="starting to provision alerting"
16:15:32 grafana | logger=provisioning.alerting t=2024-03-20T16:13:05.996569191Z level=info msg="finished to provision alerting"
16:15:32 grafana | logger=ngalert.multiorg.alertmanager t=2024-03-20T16:13:05.996917496Z level=info msg="Starting MultiOrg Alertmanager"
16:15:32 grafana | logger=ngalert.state.manager t=2024-03-20T16:13:05.996882876Z level=info msg="Warming state cache for startup"
16:15:32 grafana | logger=ngalert.state.manager t=2024-03-20T16:13:05.997454404Z level=info msg="State cache has been initialized" states=0 duration=571.318µs
16:15:32 grafana | logger=ngalert.scheduler t=2024-03-20T16:13:05.997625507Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
16:15:32 grafana | logger=ticker t=2024-03-20T16:13:05.9978176Z level=info msg=starting first_tick=2024-03-20T16:13:10Z
16:15:32 grafana | logger=grafanaStorageLogger t=2024-03-20T16:13:05.998343307Z level=info msg="Storage starting"
16:15:32 grafana | logger=http.server t=2024-03-20T16:13:06.002502269Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
16:15:32 grafana | logger=plugins.update.checker t=2024-03-20T16:13:06.133585327Z level=info msg="Update check succeeded" duration=136.395607ms
logger=plugins.update.checker t=2024-03-20T16:13:06.133585327Z level=info msg="Update check succeeded" duration=136.395607ms 16:15:32 grafana | logger=grafana.update.checker t=2024-03-20T16:13:06.134527401Z level=info msg="Update check succeeded" duration=137.162598ms 16:15:32 grafana | logger=provisioning.dashboard t=2024-03-20T16:13:06.165128511Z level=info msg="starting to provision dashboards" 16:15:32 grafana | logger=grafana-apiserver t=2024-03-20T16:13:06.266872047Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 16:15:32 grafana | logger=grafana-apiserver t=2024-03-20T16:13:06.267300844Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 16:15:32 grafana | logger=provisioning.dashboard t=2024-03-20T16:13:06.387942378Z level=info msg="finished to provision dashboards" 16:15:32 grafana | logger=infra.usagestats t=2024-03-20T16:14:57.009651912Z level=info msg="Usage stats are ready to report" 16:15:32 kafka | [2024-03-20 16:13:37,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,790] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 16:15:32 kafka | [2024-03-20 16:13:37,798] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,799] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 
(kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,804] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,805] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,807] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 7 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,811] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,812] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
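The election burst above covers all 50 partitions of the internal __consumer_offsets topic: on this single-broker stack, broker 1 becomes leader of every partition and therefore group coordinator for every consumer group (a group is served by whichever broker leads partition hash(group.id) % 50). As a hedged aside for anyone replaying this CSIT run locally, the same state can be inspected with the stock Kafka CLI while the compose stack is up; container name and listener address are taken from this run, and the wrapper may be named without the .sh suffix depending on the image:

# Sketch: confirm that broker 1 leads every __consumer_offsets partition
docker exec kafka kafka-topics.sh --bootstrap-server kafka:9092 \
    --describe --topic __consumer_offsets | head -n 5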
16:15:32 kafka | [2024-03-20 16:13:37,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,813] INFO [Broker id=1] Finished LeaderAndIsr request in 736ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,813] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,814] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,815] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,816] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,817] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,818] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,819] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,820] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,820] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,821] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,821] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,821] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,821] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,822] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,822] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,822] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,823] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 18 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
16:15:32 kafka | [2024-03-20 16:13:37,823] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
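The "Finished LeaderAndIsr request in 736ms ... for 51 partitions" line embedded above accounts for the 50 __consumer_offsets partitions plus the single policy-pdp-pap partition used for PAP/PDP messaging; the controller's TRACE response that follows reports errorCode=0 for each of them. A hedged spot-check of the same leadership state, with names exactly as they appear in this log:

# Sketch: verify the policy-pdp-pap partition leader on this one-broker stack
docker exec kafka kafka-topics.sh --bootstrap-server kafka:9092 \
    --describe --topic policy-pdp-pap
# expected here: Partition: 0  Leader: 1  Replicas: 1  Isr: 1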
16:15:32 kafka | [2024-03-20 16:13:37,826] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=QvJGL4ltS_qYQrjK6IZK9A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=NJ0Ig4IURHCsgbfyjcgUCg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,839] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,840] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,841] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
16:15:32 kafka | [2024-03-20 16:13:37,842] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
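With LeaderAndIsr acknowledged, the controller pushes UpdateMetadata so the broker can serve client metadata queries from its cache; the "Add 51 partitions ... from metadata cache" and UPDATE_METADATA response entries above mark that cache as warm. A client-side view of the same metadata can be dumped with kcat, which is an assumption here (it is not part of this job's tooling, and the broker address depends on the compose port mapping):

# Sketch: list brokers, topics and per-partition leaders as a client sees them
kcat -b localhost:9092 -L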
16:15:32 kafka | [2024-03-20 16:13:37,881] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group cd3571d2-bf35-4e38-b6c0-741ea8425298 in Empty state. Created a new member id consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3-6979a0d6-81ba-4972-a608-c19ab7cdc18b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,900] INFO [GroupCoordinator 1]: Preparing to rebalance group cd3571d2-bf35-4e38-b6c0-741ea8425298 in state PreparingRebalance with old generation 0 (__consumer_offsets-10) (reason: Adding new member consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3-6979a0d6-81ba-4972-a608-c19ab7cdc18b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,906] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-6c72e110-e158-4eb3-95a3-9ff5ed05d348 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,909] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-6c72e110-e158-4eb3-95a3-9ff5ed05d348 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,964] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 2f5e0a58-910b-431c-bb29-e00354420c7f in Empty state. Created a new member id consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2-a4cfeb9b-05bc-4cc7-acd4-28b5f4173a72 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:37,969] INFO [GroupCoordinator 1]: Preparing to rebalance group 2f5e0a58-910b-431c-bb29-e00354420c7f in state PreparingRebalance with old generation 0 (__consumer_offsets-21) (reason: Adding new member consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2-a4cfeb9b-05bc-4cc7-acd4-28b5f4173a72 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:40,913] INFO [GroupCoordinator 1]: Stabilized group cd3571d2-bf35-4e38-b6c0-741ea8425298 generation 1 (__consumer_offsets-10) with 1 members (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:40,920] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:40,931] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-6c72e110-e158-4eb3-95a3-9ff5ed05d348 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:40,931] INFO [GroupCoordinator 1]: Assignment received from leader consumer-cd3571d2-bf35-4e38-b6c0-741ea8425298-3-6979a0d6-81ba-4972-a608-c19ab7cdc18b for group cd3571d2-bf35-4e38-b6c0-741ea8425298 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:40,983] INFO [GroupCoordinator 1]: Stabilized group 2f5e0a58-910b-431c-bb29-e00354420c7f generation 1 (__consumer_offsets-21) with 1 members (kafka.coordinator.group.GroupCoordinator)
16:15:32 kafka | [2024-03-20 16:13:41,003] INFO [GroupCoordinator 1]: Assignment received from leader consumer-2f5e0a58-910b-431c-bb29-e00354420c7f-2-a4cfeb9b-05bc-4cc7-acd4-28b5f4173a72 for group 2f5e0a58-910b-431c-bb29-e00354420c7f for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
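The "rebalance failed due to MemberIdRequiredException" reasons above are not errors: a new dynamic member's first JoinGroup carries no member id, the coordinator rejects it with a freshly minted id, and the member immediately rejoins with that id. All three groups here (policy-pap plus the two UUID-named PAP/PDP-side consumers) then stabilize at generation 1 with one member each and receive the leader's assignment. A hedged way to confirm the steady state from outside, with the same CLI caveats as above (--state is supported by recent Kafka releases):

# Sketch: check that the policy-pap group has settled
docker exec kafka kafka-consumer-groups.sh --bootstrap-server kafka:9092 \
    --describe --group policy-pap --state
# STATE should read Stable once the generation-1 assignment above is in place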
16:15:32 ++ echo 'Tearing down containers...'
16:15:32 Tearing down containers...
16:15:32 ++ docker-compose down -v --remove-orphans
16:15:33 Stopping policy-apex-pdp ...
16:15:33 Stopping policy-pap ...
16:15:33 Stopping kafka ...
16:15:33 Stopping policy-api ...
16:15:33 Stopping grafana ...
16:15:33 Stopping mariadb ...
16:15:33 Stopping simulator ...
16:15:33 Stopping compose_zookeeper_1 ...
16:15:33 Stopping prometheus ...
16:15:34 Stopping grafana ... done
16:15:34 Stopping prometheus ... done
16:15:43 Stopping policy-apex-pdp ... done
16:15:54 Stopping simulator ... done
16:15:54 Stopping policy-pap ... done
16:15:55 Stopping mariadb ... done
16:15:55 Stopping kafka ... done
16:15:55 Stopping compose_zookeeper_1 ... done
16:16:04 Stopping policy-api ... done
16:16:04 Removing policy-apex-pdp ...
16:16:04 Removing policy-pap ...
16:16:04 Removing kafka ...
16:16:04 Removing policy-api ...
16:16:04 Removing policy-db-migrator ...
16:16:04 Removing grafana ...
16:16:04 Removing mariadb ...
16:16:04 Removing simulator ...
16:16:04 Removing compose_zookeeper_1 ...
16:16:04 Removing prometheus ...
16:16:04 Removing grafana ... done
16:16:04 Removing policy-api ... done
16:16:04 Removing prometheus ... done
16:16:04 Removing policy-pap ... done
16:16:04 Removing simulator ... done
16:16:04 Removing policy-apex-pdp ... done
16:16:04 Removing mariadb ... done
16:16:04 Removing policy-db-migrator ... done
16:16:04 Removing kafka ... done
16:16:04 Removing compose_zookeeper_1 ... done
16:16:04 Removing network compose_default
16:16:04 ++ cd /w/workspace/policy-pap-master-project-csit-verify-pap
16:16:04 + load_set
16:16:04 + _setopts=hxB
16:16:04 ++ echo braceexpand:hashall:interactive-comments:xtrace
16:16:04 ++ tr : ' '
16:16:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:16:04 + set +o braceexpand
16:16:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:16:04 + set +o hashall
16:16:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:16:04 + set +o interactive-comments
16:16:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
16:16:04 + set +o xtrace
16:16:05 ++ echo hxB
16:16:05 ++ sed 's/./& /g'
16:16:05 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:16:05 + set +h
16:16:05 + for i in $(echo "$_setopts" | sed 's/./& /g')
16:16:05 + set +x
16:16:05 + [[ -n /tmp/tmp.STfXWGE5EF ]]
16:16:05 + rsync -av /tmp/tmp.STfXWGE5EF/ /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap
16:16:05 sending incremental file list
16:16:05 ./
16:16:05 log.html
16:16:05 output.xml
16:16:05 report.html
16:16:05 testplan.txt
16:16:05
16:16:05 sent 920,075 bytes received 95 bytes 1,840,340.00 bytes/sec
16:16:05 total size is 919,529 speedup is 1.00
16:16:05 + rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/models
16:16:05 + exit 1
16:16:05 Build step 'Execute shell' marked build as failure
16:16:05 $ ssh-agent -k
16:16:05 unset SSH_AUTH_SOCK;
16:16:05 unset SSH_AGENT_PID;
16:16:05 echo Agent pid 2101 killed;
16:16:05 [ssh-agent] Stopped.
16:16:05 Robot results publisher started...
16:16:05 INFO: Checking test criticality is deprecated and will be dropped in a future release!
16:16:05 -Parsing output xml:
16:16:05 Done!
16:16:05 WARNING! Could not find file: **/log.html
16:16:05 WARNING! Could not find file: **/report.html
16:16:05 -Copying log files to build dir:
16:16:05 Done!
16:16:05 -Assigning results to build:
16:16:05 Done!
16:16:05 -Checking thresholds:
16:16:05 Done!
16:16:05 Done publishing Robot results.
16:16:05 [PostBuildScript] - [INFO] Executing post build scripts.
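The load_set trace above is the usual LF CI idiom for unwinding shell tracing at the end of a step: every long-form option named in $SHELLOPTS is cleared with set +o, then each single-letter flag recorded earlier (here hxB) is cleared with set +<flag>. A minimal reconstruction consistent with that trace follows; the authoritative version lives in the CSIT scripts, and in the real script the hxB string is presumably captured earlier (e.g. from $-) rather than hard-coded:

load_set() {
    _setopts=hxB                                   # saved single-letter flags (assumed captured by a matching save step)
    for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
        set +o "$i"                                # clear long-form options: braceexpand, hashall, ...
    done
    for i in $(echo "$_setopts" | sed 's/./& /g'); do
        set "+$i"                                  # clear the recorded single-letter flags, e.g. +h, +x
    done
}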
16:16:05 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins8256188345274938250.sh
16:16:05 ---> sysstat.sh
16:16:06 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins3686401708647505560.sh
16:16:06 ---> package-listing.sh
16:16:06 ++ tr '[:upper:]' '[:lower:]'
16:16:06 ++ facter osfamily
16:16:06 + OS_FAMILY=debian
16:16:06 + workspace=/w/workspace/policy-pap-master-project-csit-verify-pap
16:16:06 + START_PACKAGES=/tmp/packages_start.txt
16:16:06 + END_PACKAGES=/tmp/packages_end.txt
16:16:06 + DIFF_PACKAGES=/tmp/packages_diff.txt
16:16:06 + PACKAGES=/tmp/packages_start.txt
16:16:06 + '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
16:16:06 + PACKAGES=/tmp/packages_end.txt
16:16:06 + case "${OS_FAMILY}" in
16:16:06 + grep '^ii'
16:16:06 + dpkg -l
16:16:06 + '[' -f /tmp/packages_start.txt ']'
16:16:06 + '[' -f /tmp/packages_end.txt ']'
16:16:06 + diff /tmp/packages_start.txt /tmp/packages_end.txt
16:16:06 + '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
16:16:06 + mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
16:16:06 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
16:16:06 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins7896556952197293794.sh
16:16:06 ---> capture-instance-metadata.sh
16:16:06 Setup pyenv:
16:16:06 system
16:16:06 3.8.13
16:16:06 3.9.13
16:16:06 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
16:16:06 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5zh5 from file:/tmp/.os_lf_venv
16:16:08 lf-activate-venv(): INFO: Installing: lftools
16:16:17 lf-activate-venv(): INFO: Adding /tmp/venv-5zh5/bin to PATH
16:16:17 INFO: Running in OpenStack, capturing instance metadata
16:16:18 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins13186452596070920656.sh
16:16:18 provisioning config files...
16:16:18 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/config13875912983105132099tmp
16:16:18 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
16:16:18 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
16:16:18 [EnvInject] - Injecting environment variables from a build step.
16:16:18 [EnvInject] - Injecting as environment variables the properties content
16:16:18 SERVER_ID=logs
16:16:18
16:16:18 [EnvInject] - Variables injected successfully.
16:16:18 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins16954050892150768452.sh
16:16:18 ---> create-netrc.sh
16:16:18 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins12388256398393268121.sh
16:16:18 ---> python-tools-install.sh
16:16:18 Setup pyenv:
16:16:18 system
16:16:18 3.8.13
16:16:18 3.9.13
16:16:18 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
16:16:18 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5zh5 from file:/tmp/.os_lf_venv
16:16:19 lf-activate-venv(): INFO: Installing: lftools
16:16:28 lf-activate-venv(): INFO: Adding /tmp/venv-5zh5/bin to PATH
16:16:28 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins10092312882038452567.sh
16:16:28 ---> sudo-logs.sh
16:16:28 Archiving 'sudo' log..
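package-listing.sh snapshots the installed packages so the job archive records what the build added: on a Debian-family node it filters dpkg -l down to installed ('ii') entries, diffs the start/end snapshots, and copies all three files into the workspace archive. The core of it, per the trace above ($WORKSPACE standing in for the job path; diff exits non-zero when the lists differ, hence the guard):

dpkg -l | grep '^ii' > /tmp/packages_end.txt
if [ -f /tmp/packages_start.txt ] && [ -f /tmp/packages_end.txt ]; then
    diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
fi
mkdir -p "$WORKSPACE/archives/"
cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "$WORKSPACE/archives/"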
16:16:28 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins13465431986496112363.sh 16:16:28 ---> job-cost.sh 16:16:28 Setup pyenv: 16:16:28 system 16:16:28 3.8.13 16:16:28 3.9.13 16:16:28 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version) 16:16:28 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5zh5 from file:/tmp/.os_lf_venv 16:16:29 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 16:16:34 lf-activate-venv(): INFO: Adding /tmp/venv-5zh5/bin to PATH 16:16:34 INFO: No Stack... 16:16:34 INFO: Retrieving Pricing Info for: v3-standard-8 16:16:35 INFO: Archiving Costs 16:16:35 [policy-pap-master-project-csit-verify-pap] $ /bin/bash -l /tmp/jenkins12829264614575201717.sh 16:16:35 ---> logs-deploy.sh 16:16:35 Setup pyenv: 16:16:35 system 16:16:35 3.8.13 16:16:35 3.9.13 16:16:35 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version) 16:16:35 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-5zh5 from file:/tmp/.os_lf_venv 16:16:36 lf-activate-venv(): INFO: Installing: lftools 16:16:44 lf-activate-venv(): INFO: Adding /tmp/venv-5zh5/bin to PATH 16:16:44 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-verify-pap/515 16:16:44 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 16:16:45 Archives upload complete. 16:16:46 INFO: archiving logs to Nexus 16:16:47 ---> uname -a: 16:16:47 Linux prd-ubuntu1804-docker-8c-8g-14655 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 16:16:47 16:16:47 16:16:47 ---> lscpu: 16:16:47 Architecture: x86_64 16:16:47 CPU op-mode(s): 32-bit, 64-bit 16:16:47 Byte Order: Little Endian 16:16:47 CPU(s): 8 16:16:47 On-line CPU(s) list: 0-7 16:16:47 Thread(s) per core: 1 16:16:47 Core(s) per socket: 1 16:16:47 Socket(s): 8 16:16:47 NUMA node(s): 1 16:16:47 Vendor ID: AuthenticAMD 16:16:47 CPU family: 23 16:16:47 Model: 49 16:16:47 Model name: AMD EPYC-Rome Processor 16:16:47 Stepping: 0 16:16:47 CPU MHz: 2800.000 16:16:47 BogoMIPS: 5600.00 16:16:47 Virtualization: AMD-V 16:16:47 Hypervisor vendor: KVM 16:16:47 Virtualization type: full 16:16:47 L1d cache: 32K 16:16:47 L1i cache: 32K 16:16:47 L2 cache: 512K 16:16:47 L3 cache: 16384K 16:16:47 NUMA node0 CPU(s): 0-7 16:16:47 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 16:16:47 16:16:47 16:16:47 ---> nproc: 16:16:47 8 16:16:47 16:16:47 16:16:47 ---> df -h: 16:16:47 Filesystem Size Used Avail Use% Mounted on 16:16:47 udev 16G 0 16G 0% /dev 16:16:47 tmpfs 3.2G 708K 3.2G 1% /run 16:16:47 /dev/vda1 155G 14G 142G 9% / 16:16:47 tmpfs 16G 0 16G 0% /dev/shm 16:16:47 tmpfs 5.0M 0 5.0M 0% /run/lock 16:16:47 tmpfs 16G 0 16G 0% /sys/fs/cgroup 16:16:47 /dev/vda15 105M 4.4M 100M 5% /boot/efi 16:16:47 tmpfs 3.2G 0 3.2G 0% /run/user/1001 16:16:47 16:16:47 16:16:47 ---> free -m: 16:16:47 total used free shared 
16:16:47 ---> uname -a:
16:16:47 Linux prd-ubuntu1804-docker-8c-8g-14655 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
16:16:47
16:16:47 ---> lscpu:
16:16:47 Architecture:        x86_64
16:16:47 CPU op-mode(s):      32-bit, 64-bit
16:16:47 Byte Order:          Little Endian
16:16:47 CPU(s):              8
16:16:47 On-line CPU(s) list: 0-7
16:16:47 Thread(s) per core:  1
16:16:47 Core(s) per socket:  1
16:16:47 Socket(s):           8
16:16:47 NUMA node(s):        1
16:16:47 Vendor ID:           AuthenticAMD
16:16:47 CPU family:          23
16:16:47 Model:               49
16:16:47 Model name:          AMD EPYC-Rome Processor
16:16:47 Stepping:            0
16:16:47 CPU MHz:             2800.000
16:16:47 BogoMIPS:            5600.00
16:16:47 Virtualization:      AMD-V
16:16:47 Hypervisor vendor:   KVM
16:16:47 Virtualization type: full
16:16:47 L1d cache:           32K
16:16:47 L1i cache:           32K
16:16:47 L2 cache:            512K
16:16:47 L3 cache:            16384K
16:16:47 NUMA node0 CPU(s):   0-7
16:16:47 Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
16:16:47
16:16:47 ---> nproc:
16:16:47 8
16:16:47
16:16:47 ---> df -h:
16:16:47 Filesystem      Size  Used Avail Use% Mounted on
16:16:47 udev             16G     0   16G   0% /dev
16:16:47 tmpfs           3.2G  708K  3.2G   1% /run
16:16:47 /dev/vda1       155G   14G  142G   9% /
16:16:47 tmpfs            16G     0   16G   0% /dev/shm
16:16:47 tmpfs           5.0M     0  5.0M   0% /run/lock
16:16:47 tmpfs            16G     0   16G   0% /sys/fs/cgroup
16:16:47 /dev/vda15      105M  4.4M  100M   5% /boot/efi
16:16:47 tmpfs           3.2G     0  3.2G   0% /run/user/1001
16:16:47
16:16:47 ---> free -m:
16:16:47               total        used        free      shared  buff/cache   available
16:16:47 Mem:          32167         866       24851           0        6449       30844
16:16:47 Swap:          1023           0        1023
16:16:47
16:16:47 ---> ip addr:
16:16:47 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
16:16:47     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
16:16:47     inet 127.0.0.1/8 scope host lo
16:16:47        valid_lft forever preferred_lft forever
16:16:47     inet6 ::1/128 scope host
16:16:47        valid_lft forever preferred_lft forever
16:16:47 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc mq state UP group default qlen 1000
16:16:47     link/ether fa:16:3e:65:00:44 brd ff:ff:ff:ff:ff:ff
16:16:47     inet 10.30.107.99/23 brd 10.30.107.255 scope global dynamic ens3
16:16:47        valid_lft 85925sec preferred_lft 85925sec
16:16:47     inet6 fe80::f816:3eff:fe65:44/64 scope link
16:16:47        valid_lft forever preferred_lft forever
16:16:47 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
16:16:47     link/ether 02:42:a3:2f:1e:7c brd ff:ff:ff:ff:ff:ff
16:16:47     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
16:16:47        valid_lft forever preferred_lft forever
16:16:47
16:16:47 ---> sar -b -r -n DEV:
16:16:47 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14655)  03/20/24  _x86_64_  (8 CPU)
16:16:47
16:16:47 16:08:54 LINUX RESTART (8 CPU)
16:16:47
16:16:47 16:09:01        tps      rtps      wtps   bread/s   bwrtn/s
16:16:47 16:10:02     326.67     74.46    252.22   5315.96  56754.68
16:16:47 16:11:01     118.93     27.40     91.53   2119.10  27694.29
16:16:47 16:12:01     133.26      9.55    123.71   1675.05  31010.70
16:16:47 16:13:01     290.90      5.45    285.45    501.88 149329.06
16:16:47 16:14:01     226.18      6.63    219.55    303.22  25863.39
16:16:47 16:15:01      14.88      0.00     14.88      0.00  17287.87
16:16:47 16:16:01      56.44      0.02     56.42      2.40  19161.76
16:16:47 Average:     166.87     17.62    149.24   1415.19  46773.78
16:16:47
16:16:47 16:09:01 kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
16:16:47 16:10:02  30332432  31692856   2606788      7.91     51608   1630236   1457256      4.29    850392   1486916     55552
16:16:47 16:11:01  29958292  31703516   2980928      9.05     79728   1967920   1426628      4.20    874568   1797220    140164
16:16:47 16:12:01  27005628  31635056   5933592     18.01    129968   4682764   1606848      4.73   1044928   4420968   2547272
16:16:47 16:13:01  25201196  31368716   7738024     23.49    146992   6133536   4736808     13.94   1340596   5826936     11236
16:16:47 16:14:01  23221640  29517728   9717580     29.50    158668   6236216   8887312     26.15   3353948   5749084       212
16:16:47 16:15:01  23223144  29519820   9716076     29.50    158820   6236440   8847356     26.03   3355092   5746880       304
16:16:47 16:16:01  24723976  31037924   8215244     24.94    159648   6263396   2472308      7.27   1890756   5758532        88
16:16:47 Average:  26238044  30925088   6701176     20.34    126490   4735787   4204931     12.37   1815754   4398077    393547
16:16:47
16:16:47 16:09:01 IFACE           rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
16:16:47 16:10:02 lo                 1.87      1.87      0.19      0.19      0.00      0.00      0.00      0.00
16:16:47 16:10:02 ens3             420.08    271.33    961.76     73.83      0.00      0.00      0.00      0.00
16:16:47 16:10:02 docker0            0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
16:16:47 16:11:01 lo                 1.42      1.42      0.14      0.14      0.00      0.00      0.00      0.00
16:16:47 16:11:01 ens3              65.16     44.38   1034.92      8.69      0.00      0.00      0.00      0.00
16:16:47 16:11:01 docker0            0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
16:16:47 16:12:01 lo                 9.33      9.33      0.93      0.93      0.00      0.00      0.00      0.00
16:16:47 16:12:01 br-fc4fe9a57db1    0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
16:16:47 16:12:01 ens3             977.79    562.87  19789.36     44.52      0.00      0.00      0.00      0.00
16:16:47 16:12:01 docker0            0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
16:16:47 16:13:01 veth6569cb5        0.00      0.05      0.00      0.00      0.00      0.00      0.00      0.00
16:16:47 16:13:01 veth29272f2        0.05      0.15      0.00      0.01      0.00      0.00      0.00      0.00
16:16:47 16:13:01 veth191df2c        0.07      0.18      0.00      0.01      0.00      0.00      0.00      0.00
16:16:47 16:13:01 veth42706c7        0.00      0.20      0.00      0.02      0.00      0.00      0.00      0.00
16:16:47 16:14:01 veth6569cb5        0.60      0.87      0.06      0.31      0.00      0.00      0.00      0.00
16:16:47 16:14:01 veth29272f2       53.11     47.01     19.82     40.37      0.00      0.00      0.00      0.00
16:16:47 16:14:01 veth8998ac1       10.20     10.26      1.86      1.57      0.00      0.00      0.00      0.00
16:16:47 16:14:01 vethec61916       14.86     13.95      1.96      1.96      0.00      0.00      0.00      0.00
16:16:47 16:15:01 veth6569cb5        0.23      0.17      0.02      0.01      0.00      0.00      0.00      0.00
16:16:47 16:15:01 veth29272f2        0.47      0.47      0.63      0.08      0.00      0.00      0.00      0.00
16:16:47 16:15:01 veth8998ac1       43.02     39.99     10.73     35.99      0.00      0.00      0.00      0.00
16:16:47 16:15:01 vethec61916       13.83      9.33      1.05      1.34      0.00      0.00      0.00      0.00
16:16:47 16:16:01 veth29272f2        0.47      0.73      0.03      0.05      0.00      0.00      0.00      0.00
16:16:47 16:16:01 lo                34.39     34.39      6.16      6.16      0.00      0.00      0.00      0.00
16:16:47 16:16:01 br-fc4fe9a57db1    4.28      4.70      2.00      2.19      0.00      0.00      0.00      0.00
16:16:47 16:16:01 ens3            1904.43   1136.33  35275.25    164.84      0.00      0.00      0.00      0.00
16:16:47 Average: veth29272f2        7.75      6.92      2.93      5.80      0.00      0.00      0.00      0.00
16:16:47 Average: lo                 4.50      4.50      0.85      0.85      0.00      0.00      0.00      0.00
16:16:47 Average: br-fc4fe9a57db1    0.61      0.67      0.29      0.31      0.00      0.00      0.00      0.00
16:16:47 Average: ens3             272.04    162.03   5050.39     23.54      0.00      0.00      0.00      0.00
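[Editor's note] Before the per-CPU report that follows: the Average rows above condense the whole run, so the primary NIC's mean throughput can be pulled straight out of an archived console log. A hedged one-liner sketch; the field numbers assume the HH:MM:SS archiver prefix present on every line, and console.log is a hypothetical file name:

    # rxkB/s and txkB/s land in columns 6 and 7 once the time prefix is counted.
    grep 'Average:.*ens3' console.log | awk '{print "rxkB/s=" $6, "txkB/s=" $7}'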
16:16:47
16:16:47 ---> sar -P ALL:
16:16:47 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14655)  03/20/24  _x86_64_  (8 CPU)
16:16:47
16:16:47 16:08:54 LINUX RESTART (8 CPU)
16:16:47
16:16:47 16:09:01 CPU    %user    %nice  %system  %iowait   %steal    %idle
16:16:47 16:10:02 all     7.74     0.00     1.24     4.15     0.04    86.84
16:16:47 16:10:02   0     7.78     0.00     0.90     0.75     0.03    90.53
16:16:47 16:10:02   1    13.73     0.00     1.28     3.89     0.03    81.07
16:16:47 16:10:02   2     7.40     0.00     0.89     0.63     0.02    91.06
16:16:47 16:10:02   3     9.29     0.00     2.29     0.30     0.03    88.09
16:16:47 16:10:02   4     6.98     0.00     1.64     3.99     0.07    87.32
16:16:47 16:10:02   5     2.51     0.00     1.01    21.49     0.05    74.94
16:16:47 16:10:02   6     3.56     0.00     0.82     1.00     0.03    94.59
16:16:47 16:10:02   7    10.60     0.00     1.04     1.22     0.05    87.09
16:16:47 16:11:01 all    10.05     0.00     0.88     2.24     0.04    86.80
16:16:47 16:11:01   0    14.50     0.00     1.12     0.94     0.03    83.41
16:16:47 16:11:01   1     6.43     0.00     0.88     4.51     0.07    88.11
16:16:47 16:11:01   2    12.48     0.00     1.14     0.80     0.07    85.52
16:16:47 16:11:01   3    12.66     0.00     1.02     0.88     0.03    85.41
16:16:47 16:11:01   4    18.31     0.00     1.22     2.48     0.03    77.96
16:16:47 16:11:01   5    10.16     0.00     0.97     8.24     0.03    80.60
16:16:47 16:11:01   6     3.37     0.00     0.37     0.05     0.02    96.19
16:16:47 16:11:01   7     2.49     0.00     0.31     0.03     0.00    97.17
16:16:47 16:12:01 all    13.31     0.00     4.37     1.99     0.07    80.26
16:16:47 16:12:01   0    16.37     0.00     4.17     0.85     0.07    78.55
16:16:47 16:12:01   1    13.22     0.00     4.14     6.72     0.07    75.86
16:16:47 16:12:01   2     9.73     0.00     4.49     0.53     0.05    85.20
16:16:47 16:12:01   3     8.30     0.00     4.11     5.90     0.08    81.61
16:16:47 16:12:01   4    25.43     0.00     3.77     1.05     0.07    69.68
16:16:47 16:12:01   5    17.44     0.00     4.99     0.64     0.07    76.86
16:16:47 16:12:01   6     8.84     0.00     4.90     0.17     0.07    86.01
16:16:47 16:12:01   7     7.16     0.00     4.41     0.03     0.07    88.33
16:16:47 16:13:01 all     6.16     0.00     3.07    11.25     0.05    79.47
16:16:47 16:13:01   0     4.91     0.00     2.24    10.81     0.05    81.99
16:16:47 16:13:01   1     5.75     0.00     2.94    35.29     0.08    55.94
16:16:47 16:13:01   2     6.83     0.00     3.14     1.18     0.03    88.82
16:16:47 16:13:01   3    10.45     0.00     2.61     4.38     0.03    82.52
16:16:47 16:13:01   4     4.75     0.00     5.15    21.53     0.07    68.50
16:16:47 16:13:01   5     4.95     0.00     2.63     9.62     0.05    82.75
16:16:47 16:13:01   6     6.33     0.00     2.70     3.14     0.05    87.78
16:16:47 16:13:01   7     5.28     0.00     3.10     4.18     0.03    87.40
16:16:47 16:14:01 all    28.23     0.00     3.23     1.93     0.09    66.52
16:16:47 16:14:01   0    24.74     0.00     3.11     4.46     0.10    67.58
16:16:47 16:14:01   1    35.96     0.00     4.07     3.64     0.08    56.25
16:16:47 16:14:01   2    26.67     0.00     3.27     2.80     0.10    67.15
16:16:47 16:14:01   3    34.62     0.00     3.62     0.59     0.08    61.09
16:16:47 16:14:01   4    28.95     0.00     3.10     2.01     0.08    65.86
16:16:47 16:14:01   5    23.60     0.00     2.75     0.20     0.07    73.38
16:16:47 16:14:01   6    24.64     0.00     3.27     0.62     0.08    71.39
16:16:47 16:14:01   7    26.68     0.00     2.66     1.19     0.07    69.41
16:16:47 16:15:01 all     3.64     0.00     0.37     1.11     0.05    94.82
16:16:47 16:15:01   0     2.45     0.00     0.30     0.23     0.05    96.97
16:16:47 16:15:01   1     3.95     0.00     0.43     8.58     0.08    86.95
16:16:47 16:15:01   2     3.96     0.00     0.42     0.00     0.07    95.56
16:16:47 16:15:01   3     3.60     0.00     0.23     0.00     0.05    96.11
16:16:47 16:15:01   4     2.96     0.00     0.32     0.02     0.03    96.68
16:16:47 16:15:01   5     3.81     0.00     0.30     0.00     0.07    95.83
16:16:47 16:15:01   6     3.17     0.00     0.40     0.08     0.03    96.32
16:16:47 16:15:01   7     5.27     0.00     0.55     0.00     0.03    94.14
16:16:47 16:16:01 all     1.32     0.00     0.48     1.21     0.05    96.95
16:16:47 16:16:01   0     1.51     0.00     0.62     0.05     0.05    97.77
16:16:47 16:16:01   1     1.95     0.00     0.48     8.53     0.05    88.98
16:16:47 16:16:01   2     1.02     0.00     0.50     0.00     0.03    98.45
16:16:47 16:16:01   3     1.09     0.00     0.45     0.25     0.05    98.16
16:16:47 16:16:01   4     0.92     0.00     0.43     0.07     0.05    98.53
16:16:47 16:16:01   5     1.77     0.00     0.45     0.08     0.05    97.64
16:16:47 16:16:01   6     1.47     0.00     0.45     0.23     0.07    97.78
16:16:47 16:16:01   7     0.80     0.00     0.42     0.43     0.02    98.33
16:16:47 Average: all    10.05     0.00     1.94     3.41     0.05    84.54
16:16:47 Average:   0    10.30     0.00     1.78     2.58     0.06    85.29
16:16:47 Average:   1    11.58     0.00     2.03    10.14     0.07    76.18
16:16:47 Average:   2     9.71     0.00     1.97     0.85     0.05    87.41
16:16:47 Average:   3    11.42     0.00     2.04     1.75     0.05    84.73
16:16:47 Average:   4    12.59     0.00     2.23     4.44     0.06    80.69
16:16:47 Average:   5     9.16     0.00     1.87     5.74     0.06    83.17
16:16:47 Average:   6     7.34     0.00     1.84     0.76     0.05    90.01
16:16:47 Average:   7     8.34     0.00     1.78     1.01     0.04    88.83
16:16:47
16:16:47
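[Editor's note] The sar tables above are rendered from sysstat samples collected during the run (the sysstat.sh step near the top of this section). A minimal sketch of regenerating the same two reports from a sysstat binary data file, assuming the standard sa1/sadc daily file location on Ubuntu 18.04; the job's own collector may write elsewhere:

    #!/bin/bash
    # Hypothetical replay of the reports above from sysstat's daily data file.
    SA_FILE="/var/log/sysstat/sa$(date +%d)"   # assumed Ubuntu default path
    sar -b -r -n DEV -f "$SA_FILE"             # I/O, memory, per-interface rates
    sar -P ALL -f "$SA_FILE"                   # per-CPU utilization breakdown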