09:29:52 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137060
09:29:52 Running as SYSTEM
09:29:52 [EnvInject] - Loading node environment variables.
09:29:52 Building remotely on prd-ubuntu1804-docker-8c-8g-14120 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-verify-pap
09:29:52 [ssh-agent] Looking for ssh-agent implementation...
09:29:52 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
09:29:52 $ ssh-agent
09:29:52 SSH_AUTH_SOCK=/tmp/ssh-Xzx8b0uG8sas/agent.2079
09:29:52 SSH_AGENT_PID=2081
09:29:52 [ssh-agent] Started.
09:29:53 Running ssh-add (command line suppressed)
09:29:53 Identity added: /w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_12586478260995042326.key (/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_12586478260995042326.key)
09:29:53 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
09:29:53 The recommended git tool is: NONE
09:29:54 using credential onap-jenkins-ssh
09:29:54 Wiping out workspace first.
09:29:54 Cloning the remote Git repository
09:29:54 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
09:29:54 > git init /w/workspace/policy-pap-master-project-csit-verify-pap # timeout=10
09:29:55 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
09:29:55 > git --version # timeout=10
09:29:55 > git --version # 'git version 2.17.1'
09:29:55 using GIT_SSH to set credentials Gerrit user
09:29:55 Verifying host key using manually-configured host key entries
09:29:55 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
09:29:55 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
09:29:55 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
09:29:56 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
09:29:56 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
09:29:56 using GIT_SSH to set credentials Gerrit user
09:29:56 Verifying host key using manually-configured host key entries
09:29:56 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/60/137060/1 # timeout=30
09:29:56 > git rev-parse cf05a6c49bcdb598e106b0c4ae811af750608df3^{commit} # timeout=10
09:29:56 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
09:29:56 Checking out Revision cf05a6c49bcdb598e106b0c4ae811af750608df3 (refs/changes/60/137060/1)
09:29:56 > git config core.sparsecheckout # timeout=10
09:29:56 > git checkout -f cf05a6c49bcdb598e106b0c4ae811af750608df3 # timeout=30
09:29:56 Commit message: "Add kafka support in K8s CSIT"
09:29:56 > git rev-parse FETCH_HEAD^{commit} # timeout=10
09:29:56 > git rev-list --no-walk caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=10
09:29:56 provisioning config files...
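
The checkout above pins the build to patchset 1 of Gerrit change 137060. A minimal sketch of reproducing the same checkout locally, using the mirror URL and change ref from the trace (the Jenkins-specific credential and timeout wiring is omitted):

  # Clone the mirror and check out the change under test, detached.
  git clone git://cloud.onap.org/mirror/policy/docker.git
  cd docker
  git fetch origin refs/changes/60/137060/1
  git checkout -f FETCH_HEAD   # cf05a6c49bcdb598e106b0c4ae811af750608df3 in this run
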
09:29:56 copy managed file [npmrc] to file:/home/jenkins/.npmrc
09:29:56 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
09:29:56 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins13447386204884651252.sh
09:29:56 ---> python-tools-install.sh
09:29:56 Setup pyenv:
09:29:57 * system (set by /opt/pyenv/version)
09:29:57 * 3.8.13 (set by /opt/pyenv/version)
09:29:57 * 3.9.13 (set by /opt/pyenv/version)
09:29:57 * 3.10.6 (set by /opt/pyenv/version)
09:30:02 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-SZ0L
09:30:02 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
09:30:05 lf-activate-venv(): INFO: Installing: lftools
09:30:42 lf-activate-venv(): INFO: Adding /tmp/venv-SZ0L/bin to PATH
09:30:42 Generating Requirements File
09:31:17 Python 3.10.6
09:31:17 pip 23.3.2 from /tmp/venv-SZ0L/lib/python3.10/site-packages/pip (python 3.10)
09:31:18 appdirs==1.4.4
09:31:18 argcomplete==3.2.1
09:31:18 aspy.yaml==1.3.0
09:31:18 attrs==23.2.0
09:31:18 autopage==0.5.2
09:31:18 beautifulsoup4==4.12.3
09:31:18 boto3==1.34.23
09:31:18 botocore==1.34.23
09:31:18 bs4==0.0.2
09:31:18 cachetools==5.3.2
09:31:18 certifi==2023.11.17
09:31:18 cffi==1.16.0
09:31:18 cfgv==3.4.0
09:31:18 chardet==5.2.0
09:31:18 charset-normalizer==3.3.2
09:31:18 click==8.1.7
09:31:18 cliff==4.5.0
09:31:18 cmd2==2.4.3
09:31:18 cryptography==3.3.2
09:31:18 debtcollector==2.5.0
09:31:18 decorator==5.1.1
09:31:18 defusedxml==0.7.1
09:31:18 Deprecated==1.2.14
09:31:18 distlib==0.3.8
09:31:18 dnspython==2.5.0
09:31:18 docker==4.2.2
09:31:18 dogpile.cache==1.3.0
09:31:18 email-validator==2.1.0.post1
09:31:18 filelock==3.13.1
09:31:18 future==0.18.3
09:31:18 gitdb==4.0.11
09:31:18 GitPython==3.1.41
09:31:18 google-auth==2.26.2
09:31:18 httplib2==0.22.0
09:31:18 identify==2.5.33
09:31:18 idna==3.6
09:31:18 importlib-resources==1.5.0
09:31:18 iso8601==2.1.0
09:31:18 Jinja2==3.1.3
09:31:18 jmespath==1.0.1
09:31:18 jsonpatch==1.33
09:31:18 jsonpointer==2.4
09:31:18 jsonschema==4.21.1
09:31:18 jsonschema-specifications==2023.12.1
09:31:18 keystoneauth1==5.5.0
09:31:18 kubernetes==29.0.0
09:31:18 lftools==0.37.8
09:31:18 lxml==5.1.0
09:31:18 MarkupSafe==2.1.4
09:31:18 msgpack==1.0.7
09:31:18 multi_key_dict==2.0.3
09:31:18 munch==4.0.0
09:31:18 netaddr==0.10.1
09:31:18 netifaces==0.11.0
09:31:18 niet==1.4.2
09:31:18 nodeenv==1.8.0
09:31:18 oauth2client==4.1.3
09:31:18 oauthlib==3.2.2
09:31:18 openstacksdk==0.62.0
09:31:18 os-client-config==2.1.0
09:31:18 os-service-types==1.7.0
09:31:18 osc-lib==3.0.0
09:31:18 oslo.config==9.3.0
09:31:18 oslo.context==5.3.0
09:31:18 oslo.i18n==6.2.0
09:31:18 oslo.log==5.4.0
09:31:18 oslo.serialization==5.3.0
09:31:18 oslo.utils==7.0.0
09:31:18 packaging==23.2
09:31:18 pbr==6.0.0
09:31:18 platformdirs==4.1.0
09:31:18 prettytable==3.9.0
09:31:18 pyasn1==0.5.1
09:31:18 pyasn1-modules==0.3.0
09:31:18 pycparser==2.21
09:31:18 pygerrit2==2.0.15
09:31:18 PyGithub==2.1.1
09:31:18 pyinotify==0.9.6
09:31:18 PyJWT==2.8.0
09:31:18 PyNaCl==1.5.0
09:31:18 pyparsing==2.4.7
09:31:18 pyperclip==1.8.2
09:31:18 pyrsistent==0.20.0
09:31:18 python-cinderclient==9.4.0
09:31:18 python-dateutil==2.8.2
09:31:18 python-heatclient==3.4.0
09:31:18 python-jenkins==1.8.2
09:31:18 python-keystoneclient==5.3.0
09:31:18 python-magnumclient==4.3.0
09:31:18 python-novaclient==18.4.0
09:31:18 python-openstackclient==6.0.0
09:31:18 python-swiftclient==4.4.0
09:31:18 pytz==2023.3.post1
09:31:18 PyYAML==6.0.1
09:31:18 referencing==0.32.1
09:31:18 requests==2.31.0
09:31:18 requests-oauthlib==1.3.1
09:31:18 requestsexceptions==1.4.0
09:31:18 rfc3986==2.0.0
09:31:18 rpds-py==0.17.1
09:31:18 rsa==4.9
09:31:18 ruamel.yaml==0.18.5
09:31:18 ruamel.yaml.clib==0.2.8
09:31:18 s3transfer==0.10.0
09:31:18 simplejson==3.19.2
09:31:18 six==1.16.0
09:31:18 smmap==5.0.1
09:31:18 soupsieve==2.5
09:31:18 stevedore==5.1.0
09:31:18 tabulate==0.9.0
09:31:18 toml==0.10.2
09:31:18 tomlkit==0.12.3
09:31:18 tqdm==4.66.1
09:31:18 typing_extensions==4.9.0
09:31:18 tzdata==2023.4
09:31:18 urllib3==1.26.18
09:31:18 virtualenv==20.25.0
09:31:18 wcwidth==0.2.13
09:31:18 websocket-client==1.7.0
09:31:18 wrapt==1.16.0
09:31:18 xdg==6.0.0
09:31:18 xmltodict==0.13.0
09:31:18 yq==3.2.3
09:31:18 [EnvInject] - Injecting environment variables from a build step.
09:31:18 [EnvInject] - Injecting as environment variables the properties content
09:31:18 SET_JDK_VERSION=openjdk17
09:31:18 GIT_URL="git://cloud.onap.org/mirror"
09:31:18
09:31:18 [EnvInject] - Variables injected successfully.
09:31:18 [policy-pap-master-project-csit-verify-pap] $ /bin/sh /tmp/jenkins6846678910951635552.sh
09:31:18 ---> update-java-alternatives.sh
09:31:18 ---> Updating Java version
09:31:18 ---> Ubuntu/Debian system detected
09:31:18 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
09:31:18 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
09:31:18 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
09:31:18 openjdk version "17.0.4" 2022-07-19
09:31:18 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
09:31:18 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
09:31:18 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
09:31:18 [EnvInject] - Injecting environment variables from a build step.
09:31:18 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
09:31:18 [EnvInject] - Variables injected successfully.
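
The Java switch above is plain update-alternatives driven by SET_JDK_VERSION=openjdk17. A minimal sketch of doing the same by hand on Ubuntu/Debian, assuming the JDK path shown in the trace:

  # Select OpenJDK 17 for java/javac and export JAVA_HOME to match.
  sudo update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java
  sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
  export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
  java -version   # should report openjdk version "17.0.4"
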
09:31:18 [policy-pap-master-project-csit-verify-pap] $ /bin/sh -xe /tmp/jenkins1501664030327391999.sh
09:31:18 + /w/workspace/policy-pap-master-project-csit-verify-pap/csit/run-project-csit.sh pap
09:31:18 + set +u
09:31:18 + save_set
09:31:18 + RUN_CSIT_SAVE_SET=ehxB
09:31:18 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
09:31:18 + '[' 1 -eq 0 ']'
09:31:18 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
09:31:18 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
09:31:18 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
09:31:18 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts
09:31:18 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts
09:31:18 + export ROBOT_VARIABLES=
09:31:18 + ROBOT_VARIABLES=
09:31:18 + export PROJECT=pap
09:31:18 + PROJECT=pap
09:31:18 + cd /w/workspace/policy-pap-master-project-csit-verify-pap
09:31:18 + rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap
09:31:18 + mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap
09:31:18 + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh
09:31:18 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh ']'
09:31:18 + relax_set
09:31:18 + set +e
09:31:18 + set +o pipefail
09:31:18 + . /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh
09:31:18 ++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
09:31:18 +++ mktemp -d
09:31:18 ++ ROBOT_VENV=/tmp/tmp.lkXiTNbz72
09:31:18 ++ echo ROBOT_VENV=/tmp/tmp.lkXiTNbz72
09:31:18 +++ python3 --version
09:31:18 ++ echo 'Python version is: Python 3.6.9'
09:31:18 Python version is: Python 3.6.9
09:31:18 ++ python3 -m venv --clear /tmp/tmp.lkXiTNbz72
09:31:20 ++ source /tmp/tmp.lkXiTNbz72/bin/activate
09:31:20 +++ deactivate nondestructive
09:31:20 +++ '[' -n '' ']'
09:31:20 +++ '[' -n '' ']'
09:31:20 +++ '[' -n /bin/bash -o -n '' ']'
09:31:20 +++ hash -r
09:31:20 +++ '[' -n '' ']'
09:31:20 +++ unset VIRTUAL_ENV
09:31:20 +++ '[' '!' nondestructive = nondestructive ']'
09:31:20 +++ VIRTUAL_ENV=/tmp/tmp.lkXiTNbz72
09:31:20 +++ export VIRTUAL_ENV
09:31:20 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
09:31:20 +++ PATH=/tmp/tmp.lkXiTNbz72/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
09:31:20 +++ export PATH
09:31:20 +++ '[' -n '' ']'
09:31:20 +++ '[' -z '' ']'
09:31:20 +++ _OLD_VIRTUAL_PS1=
09:31:20 +++ '[' 'x(tmp.lkXiTNbz72) ' '!=' x ']'
09:31:20 +++ PS1='(tmp.lkXiTNbz72) '
09:31:20 +++ export PS1
09:31:20 +++ '[' -n /bin/bash -o -n '' ']'
09:31:20 +++ hash -r
09:31:20 ++ set -exu
09:31:20 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
09:31:23 ++ echo 'Installing Python Requirements'
09:31:23 Installing Python Requirements
09:31:23 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/pylibs.txt
09:31:41 ++ python3 -m pip -qq freeze
09:31:41 bcrypt==4.0.1
09:31:41 beautifulsoup4==4.12.3
09:31:41 bitarray==2.9.2
09:31:41 certifi==2023.11.17
09:31:41 cffi==1.15.1
09:31:41 charset-normalizer==2.0.12
09:31:41 cryptography==40.0.2
09:31:41 decorator==5.1.1
09:31:41 elasticsearch==7.17.9
09:31:41 elasticsearch-dsl==7.4.1
09:31:41 enum34==1.1.10
09:31:41 idna==3.6
09:31:41 importlib-resources==5.4.0
09:31:41 ipaddr==2.2.0
09:31:41 isodate==0.6.1
09:31:41 jmespath==0.10.0
09:31:41 jsonpatch==1.32
09:31:41 jsonpath-rw==1.4.0
09:31:41 jsonpointer==2.3
09:31:41 lxml==5.1.0
09:31:41 netaddr==0.8.0
09:31:41 netifaces==0.11.0
09:31:41 odltools==0.1.28
09:31:41 paramiko==3.4.0
09:31:41 pkg_resources==0.0.0
09:31:41 ply==3.11
09:31:41 pyang==2.6.0
09:31:41 pyangbind==0.8.1
09:31:41 pycparser==2.21
09:31:41 pyhocon==0.3.60
09:31:41 PyNaCl==1.5.0
09:31:41 pyparsing==3.1.1
09:31:41 python-dateutil==2.8.2
09:31:41 regex==2023.8.8
09:31:41 requests==2.27.1
09:31:41 robotframework==6.1.1
09:31:41 robotframework-httplibrary==0.4.2
09:31:41 robotframework-pythonlibcore==3.0.0
09:31:41 robotframework-requests==0.9.4
09:31:41 robotframework-selenium2library==3.0.0
09:31:41 robotframework-seleniumlibrary==5.1.3
09:31:41 robotframework-sshlibrary==3.8.0
09:31:41 scapy==2.5.0
09:31:41 scp==0.14.5
09:31:41 selenium==3.141.0
09:31:41 six==1.16.0
09:31:41 soupsieve==2.3.2.post1
09:31:41 urllib3==1.26.18
09:31:41 waitress==2.0.0
09:31:41 WebOb==1.8.7
09:31:41 WebTest==3.0.0
09:31:41 zipp==3.6.0
09:31:41 ++ mkdir -p /tmp/tmp.lkXiTNbz72/src/onap
09:31:41 ++ rm -rf /tmp/tmp.lkXiTNbz72/src/onap/testsuite
09:31:41 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
09:31:48 ++ echo 'Installing python confluent-kafka library'
09:31:48 Installing python confluent-kafka library
09:31:48 ++ python3 -m pip install -qq confluent-kafka
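
Condensed, prepare-robot-env.sh above amounts to the following sketch (same commands as the trace with the xtrace noise dropped; the pylibs.txt path is the one shown in the log):

  # Throwaway venv for the Robot Framework test stack.
  ROBOT_VENV=$(mktemp -d)
  python3 -m venv --clear "$ROBOT_VENV"
  source "$ROBOT_VENV/bin/activate"
  python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
  python3 -m pip install -qq -r csit/resources/scripts/pylibs.txt
  python3 -m pip install -qq --upgrade \
      --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple \
      'robotframework-onap==0.6.0.*' --pre
  python3 -m pip install -qq confluent-kafka
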
09:31:50 ++ echo 'Uninstall docker-py and reinstall docker.'
09:31:50 Uninstall docker-py and reinstall docker.
09:31:50 ++ python3 -m pip uninstall -y -qq docker
09:31:50 ++ python3 -m pip install -U -qq docker
09:31:51 ++ python3 -m pip -qq freeze
09:31:52 bcrypt==4.0.1
09:31:52 beautifulsoup4==4.12.3
09:31:52 bitarray==2.9.2
09:31:52 certifi==2023.11.17
09:31:52 cffi==1.15.1
09:31:52 charset-normalizer==2.0.12
09:31:52 confluent-kafka==2.3.0
09:31:52 cryptography==40.0.2
09:31:52 decorator==5.1.1
09:31:52 deepdiff==5.7.0
09:31:52 dnspython==2.2.1
09:31:52 docker==5.0.3
09:31:52 elasticsearch==7.17.9
09:31:52 elasticsearch-dsl==7.4.1
09:31:52 enum34==1.1.10
09:31:52 future==0.18.3
09:31:52 idna==3.6
09:31:52 importlib-resources==5.4.0
09:31:52 ipaddr==2.2.0
09:31:52 isodate==0.6.1
09:31:52 Jinja2==3.0.3
09:31:52 jmespath==0.10.0
09:31:52 jsonpatch==1.32
09:31:52 jsonpath-rw==1.4.0
09:31:52 jsonpointer==2.3
09:31:52 kafka-python==2.0.2
09:31:52 lxml==5.1.0
09:31:52 MarkupSafe==2.0.1
09:31:52 more-itertools==5.0.0
09:31:52 netaddr==0.8.0
09:31:52 netifaces==0.11.0
09:31:52 odltools==0.1.28
09:31:52 ordered-set==4.0.2
09:31:52 paramiko==3.4.0
09:31:52 pbr==6.0.0
09:31:52 pkg_resources==0.0.0
09:31:52 ply==3.11
09:31:52 protobuf==3.19.6
09:31:52 pyang==2.6.0
09:31:52 pyangbind==0.8.1
09:31:52 pycparser==2.21
09:31:52 pyhocon==0.3.60
09:31:52 PyNaCl==1.5.0
09:31:52 pyparsing==3.1.1
09:31:52 python-dateutil==2.8.2
09:31:52 PyYAML==6.0.1
09:31:52 regex==2023.8.8
09:31:52 requests==2.27.1
09:31:52 robotframework==6.1.1
09:31:52 robotframework-httplibrary==0.4.2
09:31:52 robotframework-onap==0.6.0.dev105
09:31:52 robotframework-pythonlibcore==3.0.0
09:31:52 robotframework-requests==0.9.4
09:31:52 robotframework-selenium2library==3.0.0
09:31:52 robotframework-seleniumlibrary==5.1.3
09:31:52 robotframework-sshlibrary==3.8.0
09:31:52 robotlibcore-temp==1.0.2
09:31:52 scapy==2.5.0
09:31:52 scp==0.14.5
09:31:52 selenium==3.141.0
09:31:52 six==1.16.0
09:31:52 soupsieve==2.3.2.post1
09:31:52 urllib3==1.26.18
09:31:52 waitress==2.0.0
09:31:52 WebOb==1.8.7
09:31:52 websocket-client==1.3.1
09:31:52 WebTest==3.0.0
09:31:52 zipp==3.6.0
09:31:52 ++ uname
09:31:52 ++ grep -q Linux
09:31:52 ++ sudo apt-get -y -qq install libxml2-utils
09:31:52 + load_set
09:31:52 + _setopts=ehuxB
09:31:52 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
09:31:52 ++ tr : ' '
09:31:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:31:52 + set +o braceexpand
09:31:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:31:52 + set +o hashall
09:31:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:31:52 + set +o interactive-comments
09:31:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:31:52 + set +o nounset
09:31:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:31:52 + set +o xtrace
09:31:52 ++ echo ehuxB
09:31:52 ++ sed 's/./& /g'
09:31:52 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:31:52 + set +e
09:31:52 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:31:52 + set +h
09:31:52 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:31:52 + set +u
09:31:52 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:31:52 + set +x
09:31:52 + source_safely /tmp/tmp.lkXiTNbz72/bin/activate
09:31:52 + '[' -z /tmp/tmp.lkXiTNbz72/bin/activate ']'
09:31:52 + relax_set
09:31:52 + set +e
09:31:52 + set +o pipefail
09:31:52 + . /tmp/tmp.lkXiTNbz72/bin/activate
09:31:52 ++ deactivate nondestructive
09:31:52 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin ']'
09:31:52 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
09:31:52 ++ export PATH
09:31:52 ++ unset _OLD_VIRTUAL_PATH
09:31:52 ++ '[' -n '' ']'
09:31:52 ++ '[' -n /bin/bash -o -n '' ']'
09:31:52 ++ hash -r
09:31:52 ++ '[' -n '' ']'
09:31:52 ++ unset VIRTUAL_ENV
09:31:52 ++ '[' '!' nondestructive = nondestructive ']'
09:31:52 ++ VIRTUAL_ENV=/tmp/tmp.lkXiTNbz72
09:31:52 ++ export VIRTUAL_ENV
09:31:52 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
09:31:52 ++ PATH=/tmp/tmp.lkXiTNbz72/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
09:31:52 ++ export PATH
09:31:52 ++ '[' -n '' ']'
09:31:52 ++ '[' -z '' ']'
09:31:52 ++ _OLD_VIRTUAL_PS1='(tmp.lkXiTNbz72) '
09:31:52 ++ '[' 'x(tmp.lkXiTNbz72) ' '!=' x ']'
09:31:52 ++ PS1='(tmp.lkXiTNbz72) (tmp.lkXiTNbz72) '
09:31:52 ++ export PS1
09:31:52 ++ '[' -n /bin/bash -o -n '' ']'
09:31:52 ++ hash -r
09:31:52 + load_set
09:31:52 + _setopts=hxB
09:31:52 ++ echo braceexpand:hashall:interactive-comments:xtrace
09:31:52 ++ tr : ' '
09:31:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:31:52 + set +o braceexpand
09:31:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:31:52 + set +o hashall
09:31:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:31:52 + set +o interactive-comments
09:31:52 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:31:52 + set +o xtrace
09:31:52 ++ echo hxB
09:31:52 ++ sed 's/./& /g'
09:31:52 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:31:52 + set +h
09:31:52 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:31:52 + set +x
09:31:52 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests
09:31:52 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests
09:31:52 + export TEST_OPTIONS=
09:31:52 + TEST_OPTIONS=
09:31:52 ++ mktemp -d
09:31:52 + WORKDIR=/tmp/tmp.wqeCzjSW34
09:31:52 + cd /tmp/tmp.wqeCzjSW34
09:31:52 + docker login -u docker -p docker nexus3.onap.org:10001
09:31:52 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
09:31:52 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
09:31:52 Configure a credential helper to remove this warning. See
09:31:52 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
09:31:52
09:31:52 Login Succeeded
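
The save_set/relax_set/load_set calls that recur throughout this trace are a save-and-restore of shell options around sourced scripts: errexit and pipefail are relaxed while a helper script runs, then the snapshot is replayed. A minimal sketch of the pattern (an assumed reconstruction for illustration, not the project's exact helpers):

  save_set()  { RUN_CSIT_SAVE_SET=$-; }        # snapshot short options, e.g. "ehxB"
  relax_set() { set +e; set +o pipefail; }     # tolerate failures while sourcing
  load_set()  {                                # restore the snapshot one flag at a time
      local i
      for (( i = 0; i < ${#RUN_CSIT_SAVE_SET}; i++ )); do
          set "-${RUN_CSIT_SAVE_SET:$i:1}"
      done
  }
  source_safely() { relax_set; . "$1"; load_set; }
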
09:31:52 + SETUP=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
09:31:52 + '[' -f /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh ']'
09:31:52 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh'
09:31:52 Running setup script /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
09:31:52 + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
09:31:52 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh ']'
09:31:52 + relax_set
09:31:52 + set +e
09:31:52 + set +o pipefail
09:31:52 + . /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
09:31:52 ++ source /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/node-templates.sh
09:31:52 +++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
09:31:52 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-verify-pap/.gitreview
09:31:52 +++ GERRIT_BRANCH=master
09:31:52 +++ echo GERRIT_BRANCH=master
09:31:52 GERRIT_BRANCH=master
09:31:52 +++ rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/models
09:31:52 +++ mkdir /w/workspace/policy-pap-master-project-csit-verify-pap/models
09:31:52 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-verify-pap/models
09:31:52 Cloning into '/w/workspace/policy-pap-master-project-csit-verify-pap/models'...
09:31:54 +++ export DATA=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies
09:31:54 +++ DATA=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies
09:31:54 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates
09:31:54 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates
09:31:54 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
09:31:54 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
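
The two sed passes above generate modified variants of the vCPE monitoring policy for the tests (stream edits captured by the setup script; the source file is untouched). A minimal standalone equivalent, with the output filenames being hypothetical placeholders:

  SRC=models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
  # Variant with a different monitored metric name.
  sed -e 's!Measurement_vGMUX!ADifferentValue!' "$SRC" > vCPE.modified.json
  # Variant with the policy version bumped from 1.0.0 to 2.0.0.
  sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' \
      -e 's!"policy-version": 1!"policy-version": 2!' "$SRC" > vCPE.v2.json
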
09:31:54 ++ source /w/workspace/policy-pap-master-project-csit-verify-pap/compose/start-compose.sh apex-pdp --grafana
09:31:54 +++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
09:31:54 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-verify-pap/compose
09:31:54 +++ grafana=false
09:31:54 +++ gui=false
09:31:54 +++ [[ 2 -gt 0 ]]
09:31:54 +++ key=apex-pdp
09:31:54 +++ case $key in
09:31:54 +++ echo apex-pdp
09:31:54 apex-pdp
09:31:54 +++ component=apex-pdp
09:31:54 +++ shift
09:31:54 +++ [[ 1 -gt 0 ]]
09:31:54 +++ key=--grafana
09:31:54 +++ case $key in
09:31:54 +++ grafana=true
09:31:54 +++ shift
09:31:54 +++ [[ 0 -gt 0 ]]
09:31:54 +++ cd /w/workspace/policy-pap-master-project-csit-verify-pap/compose
09:31:54 +++ echo 'Configuring docker compose...'
09:31:54 Configuring docker compose...
09:31:54 +++ source export-ports.sh
09:31:54 +++ source get-versions.sh
09:31:56 +++ '[' -z pap ']'
09:31:56 +++ '[' -n apex-pdp ']'
09:31:56 +++ '[' apex-pdp == logs ']'
09:31:56 +++ '[' true = true ']'
09:31:56 +++ echo 'Starting apex-pdp application with Grafana'
09:31:56 Starting apex-pdp application with Grafana
09:31:56 +++ docker-compose up -d apex-pdp grafana
09:31:57 Creating network "compose_default" with the default driver
09:31:57 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
09:31:57 latest: Pulling from prom/prometheus
09:32:01 Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
09:32:01 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
09:32:01 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
09:32:01 latest: Pulling from grafana/grafana
09:32:06 Digest: sha256:6b5b37eb35bbf30e7f64bd7f0fd41c0a5b7637f65d3bf93223b04a192b8bf3e2
09:32:06 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
09:32:06 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
09:32:06 10.10.2: Pulling from mariadb
09:32:12 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
09:32:12 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
09:32:12 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)...
09:32:13 3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator
09:32:17 Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05
09:32:17 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT
09:32:17 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
09:32:17 latest: Pulling from confluentinc/cp-zookeeper
09:32:26 Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593
09:32:26 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
09:32:26 Pulling kafka (confluentinc/cp-kafka:latest)...
09:32:27 latest: Pulling from confluentinc/cp-kafka
09:32:29 Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9
09:32:29 Status: Downloaded newer image for confluentinc/cp-kafka:latest
09:32:29 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)...
09:32:29 3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator
09:32:33 Digest: sha256:eb47623eeab9aad8524ecc877b6708ae74b57f9f3cfe77554ad0d1521491cb5d
09:32:33 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT
09:32:33 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)...
09:32:33 3.1.1-SNAPSHOT: Pulling from onap/policy-api
09:32:39 Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e
09:32:39 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT
09:32:39 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.0)...
09:32:39 3.1.0: Pulling from onap/policy-pap
09:32:42 Digest: sha256:ff420a18fdd0393b657dcd1ae9e545437067fe5610606e3999888c21302a6231
09:32:42 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.0
09:32:42 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)...
09:32:43 3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp
09:32:50 Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b
09:32:50 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT
09:32:50 Creating prometheus ...
09:32:50 Creating simulator ...
09:32:50 Creating compose_zookeeper_1 ...
09:32:50 Creating mariadb ...
09:32:57 Creating compose_zookeeper_1 ... done
09:32:57 Creating kafka ...
09:32:58 Creating kafka ... done
09:32:59 Creating mariadb ... done
09:32:59 Creating policy-db-migrator ...
09:33:00 Creating policy-db-migrator ... done
09:33:00 Creating policy-api ...
09:33:01 Creating policy-api ... done
09:33:01 Creating policy-pap ...
09:33:02 Creating policy-pap ... done
09:33:03 Creating simulator ... done
09:33:03 Creating policy-apex-pdp ...
09:33:04 Creating policy-apex-pdp ... done
09:33:05 Creating prometheus ... done
09:33:05 Creating grafana ...
09:33:06 Creating grafana ... done
09:33:06 +++ echo 'Prometheus server: http://localhost:30259'
09:33:06 Prometheus server: http://localhost:30259
09:33:06 +++ echo 'Grafana server: http://localhost:30269'
09:33:06 Grafana server: http://localhost:30269
09:33:06 +++ cd /w/workspace/policy-pap-master-project-csit-verify-pap
09:33:06 ++ sleep 10
09:33:16 ++ unset http_proxy https_proxy
09:33:16 ++ bash /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
09:33:16 Waiting for REST to come up on localhost port 30003...
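
The wait script that produces the container-status tables below polls the PAP REST port until it answers. A minimal sketch of such a loop (an assumed reconstruction of wait_for_rest.sh for illustration, not the script's verbatim source):

  host=$1; port=$2
  echo "Waiting for REST to come up on $host port $port..."
  # Poll the port, listing container status each round as in the output below.
  for _ in $(seq 1 60); do
      nc -z "$host" "$port" && break
      docker ps --format 'table {{ .Names }}\t{{ .Status }}'
      sleep 5
  done
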
09:33:16 NAMES STATUS
09:33:16 grafana Up 10 seconds
09:33:16 policy-apex-pdp Up 12 seconds
09:33:16 policy-pap Up 14 seconds
09:33:16 policy-api Up 15 seconds
09:33:16 kafka Up 18 seconds
09:33:16 mariadb Up 17 seconds
09:33:16 compose_zookeeper_1 Up 19 seconds
09:33:16 simulator Up 13 seconds
09:33:16 prometheus Up 11 seconds
09:33:21 NAMES STATUS
09:33:21 grafana Up 15 seconds
09:33:21 policy-apex-pdp Up 17 seconds
09:33:21 policy-pap Up 19 seconds
09:33:21 policy-api Up 20 seconds
09:33:21 kafka Up 23 seconds
09:33:21 mariadb Up 22 seconds
09:33:21 compose_zookeeper_1 Up 24 seconds
09:33:21 simulator Up 18 seconds
09:33:21 prometheus Up 16 seconds
09:33:26 NAMES STATUS
09:33:26 grafana Up 20 seconds
09:33:26 policy-apex-pdp Up 22 seconds
09:33:26 policy-pap Up 24 seconds
09:33:26 policy-api Up 25 seconds
09:33:26 kafka Up 28 seconds
09:33:26 mariadb Up 27 seconds
09:33:26 compose_zookeeper_1 Up 29 seconds
09:33:26 simulator Up 23 seconds
09:33:26 prometheus Up 21 seconds
09:33:31 NAMES STATUS
09:33:31 grafana Up 25 seconds
09:33:31 policy-apex-pdp Up 27 seconds
09:33:31 policy-pap Up 29 seconds
09:33:31 policy-api Up 30 seconds
09:33:31 kafka Up 33 seconds
09:33:31 mariadb Up 32 seconds
09:33:31 compose_zookeeper_1 Up 34 seconds
09:33:31 simulator Up 28 seconds
09:33:31 prometheus Up 26 seconds
09:33:36 NAMES STATUS
09:33:36 grafana Up 30 seconds
09:33:36 policy-apex-pdp Up 32 seconds
09:33:36 policy-pap Up 34 seconds
09:33:36 policy-api Up 35 seconds
09:33:36 kafka Up 38 seconds
09:33:36 mariadb Up 37 seconds
09:33:36 compose_zookeeper_1 Up 39 seconds
09:33:36 simulator Up 33 seconds
09:33:36 prometheus Up 31 seconds
09:33:36 ++ export 'SUITES=pap-test.robot
09:33:36 pap-slas.robot'
09:33:36 ++ SUITES='pap-test.robot
09:33:36 pap-slas.robot'
09:33:36 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
09:33:36 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates'
09:33:36 + load_set
09:33:36 + _setopts=hxB
09:33:36 ++ echo braceexpand:hashall:interactive-comments:xtrace
09:33:36 ++ tr : ' '
09:33:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:33:36 + set +o braceexpand
09:33:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:33:36 + set +o hashall
09:33:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:33:36 + set +o interactive-comments
09:33:36 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:33:36 + set +o xtrace
09:33:36 ++ echo hxB
09:33:36 ++ sed 's/./& /g'
09:33:36 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:33:36 + set +h
09:33:36 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:33:36 + set +x
09:33:36 + docker_stats
09:33:36 ++ uname -s
09:33:36 + tee /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
09:33:36 + '[' Linux == Darwin ']'
09:33:36 + sh -c 'top -bn1 | head -3'
09:33:37 top - 09:33:37 up 4 min, 0 users, load average: 3.01, 1.26, 0.50
09:33:37 Tasks: 206 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
09:33:37 %Cpu(s): 14.1 us, 2.9 sy, 0.0 ni, 78.4 id, 4.6 wa, 0.0 hi, 0.1 si, 0.1 st
09:33:37 + echo
09:33:37 + sh -c 'free -h'
09:33:37
09:33:37 total used free shared buff/cache available
09:33:37 Mem: 31G 2.7G 22G 1.3M 6.7G 28G
09:33:37 Swap: 1.0G 0B 1.0G
09:33:37 + echo
09:33:37 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
09:33:37
09:33:37 NAMES STATUS
09:33:37 grafana Up 30 seconds
09:33:37 policy-apex-pdp Up 32 seconds
09:33:37 policy-pap Up 34 seconds
09:33:37 policy-api Up 35 seconds
09:33:37 kafka Up 38 seconds
09:33:37 mariadb Up 37 seconds
09:33:37 compose_zookeeper_1 Up 39 seconds
09:33:37 simulator Up 33 seconds
09:33:37 prometheus Up 31 seconds
09:33:37 + echo
09:33:37 + docker stats --no-stream
09:33:37
09:33:39 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
09:33:39 267a30c9e410 grafana 0.02% 53.07MiB / 31.41GiB 0.16% 18.8kB / 3.47kB 0B / 23.9MB 15
09:33:39 8be7b0a30511 policy-apex-pdp 1.24% 190.1MiB / 31.41GiB 0.59% 7.12kB / 6.82kB 0B / 0B 48
09:33:39 5a28db364f41 policy-pap 2.53% 524.1MiB / 31.41GiB 1.63% 26.8kB / 28.8kB 0B / 181MB 61
09:33:39 d306d91f2d91 policy-api 0.09% 512.5MiB / 31.41GiB 1.59% 1e+03kB / 710kB 0B / 0B 52
09:33:39 fb14a1f79088 kafka 0.61% 385.3MiB / 31.41GiB 1.20% 67.4kB / 68.5kB 0B / 508kB 81
09:33:39 7a46a0a4c8f2 mariadb 0.01% 101.4MiB / 31.41GiB 0.32% 996kB / 1.19MB 10.9MB / 68MB 38
09:33:39 83692c50eb02 compose_zookeeper_1 0.09% 100.9MiB / 31.41GiB 0.31% 54.1kB / 49.2kB 229kB / 414kB 60
09:33:39 1fdc3d3c0293 simulator 0.08% 124.2MiB / 31.41GiB 0.39% 1.06kB / 0B 0B / 0B 76
09:33:39 0206d248488d prometheus 0.00% 18.76MiB / 31.41GiB 0.06% 894B / 0B 0B / 0B 13
09:33:39 + echo
09:33:39
09:33:39 + cd /tmp/tmp.wqeCzjSW34
09:33:39 + echo 'Reading the testplan:'
09:33:39 Reading the testplan:
09:33:39 + echo 'pap-test.robot
09:33:39 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
09:33:39 + sed 's|^|/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/|'
09:33:39 pap-slas.robot'
09:33:39 + cat testplan.txt
09:33:39 /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot
09:33:39 /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot
09:33:39 ++ xargs
09:33:39 + SUITES='/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot'
09:33:39 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
09:33:39 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates'
09:33:39 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
09:33:39 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates
09:33:39 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot ...'
09:33:39 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot ...
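
The testplan expansion above is a small text pipeline: strip comments and blank lines from testplan.txt, prefix each suite with the tests directory, then flatten to a single space-separated string. Equivalently, using the TEST_PLAN_DIR exported earlier in the trace:

  # testplan.txt -> absolute suite paths -> one space-separated SUITES string
  SUITES=$(egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
      | sed "s|^|${TEST_PLAN_DIR}/|" \
      | xargs)
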
09:33:39 + relax_set
09:33:39 + set +e
09:33:39 + set +o pipefail
09:33:39 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot
09:33:39 ==============================================================================
09:33:39 pap
09:33:39 ==============================================================================
09:33:39 pap.Pap-Test
09:33:39 ==============================================================================
09:33:40 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
09:33:40 ------------------------------------------------------------------------------
09:33:41 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
09:33:41 ------------------------------------------------------------------------------
09:33:41 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
09:33:41 ------------------------------------------------------------------------------
09:33:41 Healthcheck :: Verify policy pap health check | PASS |
09:33:41 ------------------------------------------------------------------------------
09:34:02 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
09:34:02 ------------------------------------------------------------------------------
09:34:02 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
09:34:02 ------------------------------------------------------------------------------
09:34:03 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
09:34:03 ------------------------------------------------------------------------------
09:34:03 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
09:34:03 ------------------------------------------------------------------------------
09:34:03 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
09:34:03 ------------------------------------------------------------------------------
09:34:03 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
09:34:03 ------------------------------------------------------------------------------
09:34:04 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
09:34:04 ------------------------------------------------------------------------------
09:34:04 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
09:34:04 ------------------------------------------------------------------------------
09:34:04 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
09:34:04 ------------------------------------------------------------------------------
09:34:04 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
09:34:04 ------------------------------------------------------------------------------
09:34:04 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
09:34:04 ------------------------------------------------------------------------------
09:34:05 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
09:34:05 ------------------------------------------------------------------------------
09:34:05 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
09:34:05 ------------------------------------------------------------------------------
09:34:25 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
09:34:25 pdpTypeC != pdpTypeA
09:34:25 ------------------------------------------------------------------------------
09:34:25 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
09:34:25 ------------------------------------------------------------------------------
09:34:25 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
09:34:25 ------------------------------------------------------------------------------
09:34:26 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
09:34:26 ------------------------------------------------------------------------------
09:34:26 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
09:34:26 ------------------------------------------------------------------------------
09:34:26 pap.Pap-Test | FAIL |
09:34:26 22 tests, 21 passed, 1 failed
09:34:26 ==============================================================================
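
One of the 22 Pap-Test cases failed (QueryPolicyAuditAfterUnDeploy, an audit-record mismatch: pdpTypeC != pdpTypeA). For local debugging, Robot Framework can re-run just that case against the same variables; a sketch reusing the invocation from the trace:

  # Re-run the single failed test with the variables exported above.
  python3 -m robot.run -N pap --test QueryPolicyAuditAfterUnDeploy \
      ${ROBOT_VARIABLES} \
      /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot
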
09:34:26 pap.Pap-Slas
09:34:26 ==============================================================================
09:35:26 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
09:35:26 ------------------------------------------------------------------------------
09:35:26 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
09:35:26 ------------------------------------------------------------------------------
09:35:26 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
09:35:26 ------------------------------------------------------------------------------
09:35:26 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
09:35:26 ------------------------------------------------------------------------------
09:35:26 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
09:35:26 ------------------------------------------------------------------------------
09:35:26 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
09:35:26 ------------------------------------------------------------------------------
09:35:26 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
09:35:26 ------------------------------------------------------------------------------
09:35:26 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
09:35:26 ------------------------------------------------------------------------------
09:35:26 pap.Pap-Slas | PASS |
09:35:26 8 tests, 8 passed, 0 failed
09:35:26 ==============================================================================
09:35:26 pap | FAIL |
09:35:26 30 tests, 29 passed, 1 failed
09:35:26 ==============================================================================
09:35:26 Output: /tmp/tmp.wqeCzjSW34/output.xml
09:35:26 Log: /tmp/tmp.wqeCzjSW34/log.html
09:35:26 Report: /tmp/tmp.wqeCzjSW34/report.html
09:35:26 + RESULT=1
09:35:26 + load_set
09:35:26 + _setopts=hxB
09:35:26 ++ echo braceexpand:hashall:interactive-comments:xtrace
09:35:26 ++ tr : ' '
09:35:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:35:26 + set +o braceexpand
09:35:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:35:26 + set +o hashall
09:35:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:35:26 + set +o interactive-comments
09:35:26 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:35:26 + set +o xtrace
09:35:26 ++ echo hxB
09:35:26 ++ sed 's/./& /g'
09:35:26 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:35:26 + set +h
09:35:26 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:35:26 + set +x
09:35:26 + echo 'RESULT: 1'
09:35:26 RESULT: 1
09:35:26 + exit 1
09:35:26 + on_exit
09:35:26 + rc=1
09:35:26 + [[ -n /w/workspace/policy-pap-master-project-csit-verify-pap ]]
09:35:26 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
09:35:26 NAMES STATUS
09:35:26 grafana Up 2 minutes
09:35:26 policy-apex-pdp Up 2 minutes
09:35:26 policy-pap Up 2 minutes
09:35:26 policy-api Up 2 minutes
09:35:26 kafka Up 2 minutes
09:35:26 mariadb Up 2 minutes
09:35:26 compose_zookeeper_1 Up 2 minutes
09:35:26 simulator Up 2 minutes
09:35:26 prometheus Up 2 minutes
09:35:26 + docker_stats
09:35:26 ++ uname -s
09:35:26 + '[' Linux == Darwin ']'
09:35:26 + sh -c 'top -bn1 | head -3'
09:35:26 top - 09:35:26 up 6 min, 0 users, load average: 0.56, 0.92, 0.45
09:35:26 Tasks: 195 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
09:35:26 %Cpu(s): 11.1 us, 2.1 sy, 0.0 ni, 83.4 id, 3.3 wa, 0.0 hi, 0.1 si, 0.1 st
09:35:26 + echo
09:35:26
09:35:26 + sh -c 'free -h'
09:35:26 total used free shared buff/cache available
09:35:26 Mem: 31G 2.9G 21G 1.3M 6.7G 28G
09:35:26 Swap: 1.0G 0B 1.0G
09:35:26 + echo
09:35:26
09:35:26 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
09:35:26 NAMES STATUS
09:35:26 grafana Up 2 minutes
09:35:26 policy-apex-pdp Up 2 minutes
09:35:26 policy-pap Up 2 minutes
09:35:26 policy-api Up 2 minutes
09:35:26 kafka Up 2 minutes
09:35:26 mariadb Up 2 minutes
09:35:26 compose_zookeeper_1 Up 2 minutes
09:35:26 simulator Up 2 minutes
09:35:26 prometheus Up 2 minutes
09:35:26 + echo
09:35:26
09:35:26 + docker stats --no-stream
09:35:29 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
09:35:29 267a30c9e410 grafana 0.03% 56.48MiB / 31.41GiB 0.18% 20.1kB / 4.7kB 0B / 23.9MB 15
09:35:29 8be7b0a30511 policy-apex-pdp 0.49% 182.4MiB / 31.41GiB 0.57% 56.6kB / 91.1kB 0B / 0B 50
09:35:29 5a28db364f41 policy-pap 0.72% 569.1MiB / 31.41GiB 1.77% 2.33MB / 799kB 0B / 181MB 63
09:35:29 d306d91f2d91 policy-api 0.11% 583.9MiB / 31.41GiB 1.82% 2.49MB / 1.26MB 0B / 0B 53
09:35:29 fb14a1f79088 kafka 1.66% 403.1MiB / 31.41GiB 1.25% 238kB / 212kB 0B / 606kB 83
09:35:29 7a46a0a4c8f2 mariadb 0.02% 102.8MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 10.9MB / 68.4MB 28
09:35:29 83692c50eb02 compose_zookeeper_1 0.10% 101.8MiB / 31.41GiB 0.32% 57kB / 50.7kB 229kB / 414kB 60
09:35:29 1fdc3d3c0293 simulator 0.09% 124.1MiB / 31.41GiB 0.39% 1.41kB / 0B 0B / 0B 76
09:35:29 0206d248488d prometheus 0.00% 25.75MiB / 31.41GiB 0.08% 167kB / 10.8kB 0B / 0B 14
09:35:29 + echo
09:35:29
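
stop-compose.sh below collects per-container logs before tearing the stack down. The core of it, in sketch form (an assumed reconstruction; the trace shows it also re-sources export-ports.sh and get-versions.sh, and the teardown flags here are assumptions):

  cd "${COMPOSE_FOLDER}"
  echo 'Collecting logs from docker compose containers...'
  docker-compose logs > docker_compose.log   # the dump echoed below
  cat docker_compose.log
  docker-compose down -v --remove-orphans    # assumed teardown invocation
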
09:35:29 + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh
09:35:29 + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh ']'
09:35:29 + relax_set
09:35:29 + set +e
09:35:29 + set +o pipefail
09:35:29 + . /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh
09:35:29 ++ echo 'Shut down started!'
09:35:29 Shut down started!
09:35:29 ++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
09:35:29 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-verify-pap/compose
09:35:29 ++ cd /w/workspace/policy-pap-master-project-csit-verify-pap/compose
09:35:29 ++ source export-ports.sh
09:35:29 ++ source get-versions.sh
09:35:31 ++ echo 'Collecting logs from docker compose containers...'
09:35:31 Collecting logs from docker compose containers...
09:35:31 ++ docker-compose logs
09:35:32 ++ cat docker_compose.log
09:35:32 Attaching to grafana, policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, kafka, mariadb, compose_zookeeper_1, simulator, prometheus
09:35:32 zookeeper_1 | ===> User
09:35:32 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
09:35:32 zookeeper_1 | ===> Configuring ...
09:35:32 zookeeper_1 | ===> Running preflight checks ...
09:35:32 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
09:35:32 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
09:35:32 zookeeper_1 | ===> Launching ...
09:35:32 zookeeper_1 | ===> Launching zookeeper ...
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,852] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,859] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,859] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,859] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,859] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,861] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,861] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,861] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,861] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,863] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,863] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,863] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,864] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,864] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,864] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,864] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,875] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@55b53d44 (org.apache.zookeeper.server.ServerMetrics)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,878] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,878] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,880] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,889] INFO (org.apache.zookeeper.server.ZooKeeperServer)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,889] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,890] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,890] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,890] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,890] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
09:35:32 zookeeper_1 | [2024-01-22 09:33:00,890] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619508079Z level=info msg="Starting Grafana" version=10.2.3 commit=1e84fede543acc892d2a2515187e545eb047f237 branch=HEAD compiled=2023-12-18T15:46:07Z
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.61970393Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.61971998Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.61972393Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.61972717Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.61972998Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.6197326Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.61973598Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619740451Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619744901Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619748021Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619751221Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619755141Z level=info msg=Target target=[all]
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619760761Z level=info msg="Path Home" path=/usr/share/grafana
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619764691Z level=info msg="Path Data" path=/var/lib/grafana
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619767801Z level=info msg="Path Logs" path=/var/log/grafana
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619770831Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619774061Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
09:35:32 grafana | logger=settings t=2024-01-22T09:33:06.619777251Z level=info msg="App mode production"
09:35:32 grafana | logger=sqlstore t=2024-01-22T09:33:06.620106513Z level=info msg="Connecting to DB" dbtype=sqlite3
09:35:32 grafana | logger=sqlstore t=2024-01-22T09:33:06.620124104Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.620719898Z level=info msg="Starting DB migrations"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.621644327Z level=info msg="Executing migration" id="create migration_log table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.622468144Z level=info msg="Migration successfully executed" id="create migration_log table" duration=823.067µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.626599118Z level=info msg="Executing migration" id="create user table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.627317745Z level=info msg="Migration successfully executed" id="create user table" duration=718.787µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.630690453Z level=info msg="Executing migration" id="add unique index user.login"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.631441499Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=750.676µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.634920718Z level=info msg="Executing migration" id="add unique index user.email"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.635651315Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=730.007µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.640773807Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.641440473Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=666.696µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.649802754Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.650446149Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=643.415µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.653416964Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.656293248Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.880844ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.660330282Z level=info msg="Executing migration" id="create user table v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.661054679Z level=info msg="Migration successfully executed" id="create user table v2" duration=724.926µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.664057604Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.66480016Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=742.306µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.667965806Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.668739562Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=774.406µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.672345693Z level=info msg="Executing migration" id="copy data_source v1 to v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.672753917Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=402.994µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.676560168Z level=info msg="Executing migration" id="Drop old table user_v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.677098533Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=530.965µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.681101817Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.682336717Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.23468ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.690560906Z level=info msg="Executing migration" id="Update user table charset"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.690587246Z level=info msg="Migration successfully executed" id="Update user table charset" duration=27.27µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.694232327Z level=info msg="Executing migration" id="Add last_seen_at column to user"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.695335876Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.103099ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.698440803Z level=info msg="Executing migration" id="Add missing user data"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.698668075Z level=info msg="Migration successfully executed" id="Add missing user data" duration=227.361µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.701333017Z level=info msg="Executing migration" id="Add is_disabled column to user"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.702482006Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.149069ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.70529209Z level=info msg="Executing migration" id="Add index user.login/user.email"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.706020437Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=727.907µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.709663497Z level=info msg="Executing migration" id="Add is_service_account column to user"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.710840417Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.17674ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.713437308Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.723915787Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.475689ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.727787449Z level=info msg="Executing migration" id="create temp user table v1-7"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.728510216Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=719.757µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.731747523Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.732468689Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=720.846µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.736612594Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.737415421Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=805.237µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.741131801Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.741834027Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=704.716µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.745046235Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.745823801Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=780.706µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.749951016Z level=info msg="Executing migration" id="Update temp_user table charset"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.749975306Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=25.11µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.752999121Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.753682268Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=679.067µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.756850134Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.757729761Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=879.517µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.768709044Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.769350769Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=641.555µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.772300904Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.77291519Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=614.446µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.775593952Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.779269733Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.672691ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.783128245Z level=info msg="Executing migration" id="create temp_user v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.783855971Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=727.286µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.786579334Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.787418531Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=838.317µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.790585578Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.791284343Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=704.085µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.794921574Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.79561244Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=690.326µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.798335253Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.79913291Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=796.977µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.803103823Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.803482316Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=377.563µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.811669235Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.812265841Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty"
duration=601.866µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.814517269Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.814888743Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=366.364µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.817951728Z level=info msg="Executing migration" id="create star table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.818544723Z level=info msg="Migration successfully executed" id="create star table" duration=593.555µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.821262546Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.822013572Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=750.446µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.827298077Z level=info msg="Executing migration" id="create org table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.827998262Z level=info msg="Migration successfully executed" id="create org table v1" duration=699.945µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.832717162Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.833450388Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=732.936µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.836233102Z level=info msg="Executing migration" id="create org_user table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.836836627Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=604.235µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.841231874Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.842950879Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.721865ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.848188592Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.8490015Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=812.527µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.855779246Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.857154378Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.375512ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.860355415Z level=info msg="Executing migration" id="Update org table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.860385785Z level=info msg="Migration successfully executed" id="Update org table charset" duration=34.6µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.864126756Z level=info msg="Executing migration" id="Update org_user table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.864167557Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=41.641µs 09:35:32 grafana 
| logger=migrator t=2024-01-22T09:33:06.866605607Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.86697509Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=389.763µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.87051206Z level=info msg="Executing migration" id="create dashboard table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.87161804Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.1052ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.875072539Z level=info msg="Executing migration" id="add index dashboard.account_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.875903205Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=830.356µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.880492754Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.881388602Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=895.438µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.884538598Z level=info msg="Executing migration" id="create dashboard_tag table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.885256615Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=717.567µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.8905822Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.892009651Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.426831ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.898031762Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.899331723Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.297461ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.902682371Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.910735829Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.054518ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.913867085Z level=info msg="Executing migration" id="create dashboard v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.914622471Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=754.186µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.918747946Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.919518293Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=770.407µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.922646749Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.923438685Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" 
duration=791.816µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.929677119Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.930379404Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=693.955µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.936178683Z level=info msg="Executing migration" id="drop table dashboard_v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.93706882Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=887.857µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.940409448Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.94062252Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=220.972µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.944402342Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.947445248Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.042226ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.951477471Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.953269447Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.792866ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.956423903Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.959430488Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=3.004475ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.963015358Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.96433433Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.318862ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.972803681Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.976163359Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.361768ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.981958478Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,890] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,890] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,890] INFO (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,891] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,891] INFO Server environment:host.name=83692c50eb02 (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,891] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 
09:33:00,892] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,892] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.982949867Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=991.309µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.985817931Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.98695922Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.139069ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.99170358Z level=info msg="Executing migration" id="Update dashboard table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.991877942Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=78.321µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.995572372Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.995604513Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=33.151µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:06.998566248Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.000461864Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=1.891676ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.006233507Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.008170997Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.93705ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.010951966Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.013067878Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.115412ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.019514197Z level=info msg="Executing migration" id="Add column uid in dashboard" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.021556487Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.04204ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.026917134Z level=info msg="Executing migration" id="Update uid column values in dashboard" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.027202027Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=284.593µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.029956316Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.030844326Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=887.8µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.03412046Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.035504434Z 
level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.384364ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.040154373Z level=info msg="Executing migration" id="Update dashboard title length" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.040231584Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=78.071µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.045222136Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.046094925Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=872.659µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.048707583Z level=info msg="Executing migration" id="create dashboard_provisioning" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.04941443Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=708.707µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.052086858Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.059347724Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.260336ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.065239966Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.065977144Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=736.768µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.069322488Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.070154398Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=831.83µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.073130839Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.074543383Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.406714ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.080355464Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.080838319Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=483.525µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.084901632Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.085851002Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=949.18µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.08856544Z level=info msg="Executing migration" id="Add check_sum column" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.090748024Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.185444ms 09:35:32 grafana | logger=migrator 
t=2024-01-22T09:33:07.095893108Z level=info msg="Executing migration" id="Add index for dashboard_title" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.096751146Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=857.648µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.103146723Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.103601039Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=448.616µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.106978374Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.107304977Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=325.883µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.110516331Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.111659513Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.142712ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.115520044Z level=info msg="Executing migration" id="Add isPublic for dashboard" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.11805335Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.532506ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.121262054Z level=info msg="Executing migration" id="create data_source table" 09:35:32 kafka | ===> User 09:35:32 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 09:35:32 kafka | ===> Configuring ... 09:35:32 kafka | Running in Zookeeper mode... 09:35:32 kafka | ===> Running preflight checks ... 09:35:32 kafka | ===> Check if /var/lib/kafka/data is writable ... 09:35:32 kafka | ===> Check if Zookeeper is healthy ... 09:35:32 kafka | [2024-01-22 09:33:01,957] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,958] INFO Client environment:host.name=fb14a1f79088 (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,958] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,958] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,958] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,958] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,959] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,959] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,959] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,959] INFO Client environment:os.name=Linux 
(org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,959] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,959] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,959] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,959] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,959] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,960] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,960] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,960] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,963] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@62bd765 (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:01,967] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 09:35:32 kafka | [2024-01-22 09:33:01,972] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 09:35:32 kafka | [2024-01-22 09:33:01,980] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 09:35:32 kafka | [2024-01-22 09:33:01,991] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) 09:35:32 kafka | [2024-01-22 09:33:01,991] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 09:35:32 kafka | [2024-01-22 09:33:02,006] INFO Socket connection established, initiating session, client: /172.17.0.6:44224, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 09:35:32 kafka | [2024-01-22 09:33:02,032] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x10000037a550000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 09:35:32 kafka | [2024-01-22 09:33:02,152] INFO Session: 0x10000037a550000 closed (org.apache.zookeeper.ZooKeeper) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,892] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-util
s-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 
09:33:00,892] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,892] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,892] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,892] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,892] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,892] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,892] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,892] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,892] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,893] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,893] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,893] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,893] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,893] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,893] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,893] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,893] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,893] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,893] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,894] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,895] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,896] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,896] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,896] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,897] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,897] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,897] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,898] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,898] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,898] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,900] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,900] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,901] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 09:35:32 kafka | [2024-01-22 09:33:02,152] INFO EventThread shut down for session: 0x10000037a550000 (org.apache.zookeeper.ClientCnxn) 09:35:32 kafka | Using log4j config /etc/kafka/log4j.properties 09:35:32 kafka | ===> Launching ... 09:35:32 kafka | ===> Launching kafka ... 09:35:32 kafka | [2024-01-22 09:33:02,770] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 09:35:32 kafka | [2024-01-22 09:33:03,053] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 09:35:32 kafka | [2024-01-22 09:33:03,118] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 09:35:32 kafka | [2024-01-22 09:33:03,120] INFO starting (kafka.server.KafkaServer) 09:35:32 kafka | [2024-01-22 09:33:03,120] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 09:35:32 kafka | [2024-01-22 09:33:03,134] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:host.name=fb14a1f79088 (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/jav
a/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client 
environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,901] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,901] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:00,919] INFO Logging initialized @562ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,001] WARN o.e.j.s.ServletContextHandler@49c90a9c{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,001] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,017] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,053] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,053] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,054] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,057] WARN ServletContext@o.e.j.s.ServletContextHandler@49c90a9c{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,064] INFO Started o.e.j.s.ServletContextHandler@49c90a9c{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,076] INFO Started ServerConnector@723ca036{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,076] INFO Started @720ms (org.eclipse.jetty.server.Server) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,076] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,080] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,081] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,082] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,083] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,096] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,096] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,097] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,097] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,102] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,102] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,105] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,105] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,106] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,113] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,113] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,127] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 09:35:32 zookeeper_1 | [2024-01-22 09:33:01,128] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 09:35:32 zookeeper_1 | [2024-01-22 09:33:02,018] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,138] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,140] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@68be8808 (org.apache.zookeeper.ZooKeeper) 09:35:32 kafka | [2024-01-22 09:33:03,144] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 09:35:32 kafka | [2024-01-22 09:33:03,149] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 09:35:32 kafka | [2024-01-22 09:33:03,151] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 09:35:32 kafka | [2024-01-22 09:33:03,153] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) 09:35:32 kafka | [2024-01-22 09:33:03,160] INFO Socket connection established, initiating session, client: /172.17.0.6:44226, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) 09:35:32 kafka | [2024-01-22 09:33:03,168] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x10000037a550001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 09:35:32 kafka | [2024-01-22 09:33:03,171] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 09:35:32 kafka | [2024-01-22 09:33:03,494] INFO Cluster ID = CJoIAc7kRTWMdkSfJOx8eQ (kafka.server.KafkaServer) 09:35:32 kafka | [2024-01-22 09:33:03,497] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 09:35:32 kafka | [2024-01-22 09:33:03,559] INFO KafkaConfig values: 09:35:32 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 09:35:32 kafka | alter.config.policy.class.name = null 09:35:32 kafka | alter.log.dirs.replication.quota.window.num = 11 09:35:32 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 09:35:32 kafka | authorizer.class.name = 09:35:32 kafka | auto.create.topics.enable = true 09:35:32 kafka | auto.include.jmx.reporter = true 09:35:32 kafka | auto.leader.rebalance.enable = true 09:35:32 kafka | background.threads = 10 09:35:32 kafka | broker.heartbeat.interval.ms = 2000 09:35:32 kafka | broker.id = 1 09:35:32 kafka | broker.id.generation.enable = true 09:35:32 kafka | broker.rack = null 09:35:32 kafka | broker.session.timeout.ms = 9000 09:35:32 kafka | client.quota.callback.class = null 09:35:32 kafka | compression.type = producer 09:35:32 kafka | connection.failed.authentication.delay.ms = 100 09:35:32 kafka | connections.max.idle.ms = 600000 09:35:32 kafka | connections.max.reauth.ms = 0 09:35:32 kafka | control.plane.listener.name = null 09:35:32 kafka | controlled.shutdown.enable = true 09:35:32 kafka | controlled.shutdown.max.retries = 3 09:35:32 kafka | controlled.shutdown.retry.backoff.ms = 5000 09:35:32 kafka | controller.listener.names = null 09:35:32 kafka | controller.quorum.append.linger.ms = 25 09:35:32 kafka | controller.quorum.election.backoff.max.ms = 1000 09:35:32 kafka | controller.quorum.election.timeout.ms = 1000 09:35:32 kafka | controller.quorum.fetch.timeout.ms = 2000 09:35:32 kafka | controller.quorum.request.timeout.ms = 2000 09:35:32 kafka | controller.quorum.retry.backoff.ms = 20 09:35:32 kafka | controller.quorum.voters = [] 09:35:32 kafka | controller.quota.window.num = 11 09:35:32 kafka | controller.quota.window.size.seconds = 1 09:35:32 kafka | controller.socket.timeout.ms = 30000 09:35:32 kafka | create.topic.policy.class.name = null 09:35:32 kafka | default.replication.factor = 1 09:35:32 kafka | delegation.token.expiry.check.interval.ms = 3600000 09:35:32 kafka | delegation.token.expiry.time.ms = 86400000 09:35:32 kafka | delegation.token.master.key = null 09:35:32 kafka | delegation.token.max.lifetime.ms = 604800000 09:35:32 kafka | delegation.token.secret.key = null 09:35:32 kafka | delete.records.purgatory.purge.interval.requests = 1 09:35:32 kafka | delete.topic.enable = true 09:35:32 kafka | early.start.listeners = null 09:35:32 kafka | fetch.max.bytes = 57671680 09:35:32 kafka | fetch.purgatory.purge.interval.requests = 1000 09:35:32 kafka | group.consumer.assignors = [] 09:35:32 kafka | group.consumer.heartbeat.interval.ms = 5000 09:35:32 kafka | 
group.consumer.max.heartbeat.interval.ms = 15000 09:35:32 kafka | group.consumer.max.session.timeout.ms = 60000 09:35:32 kafka | group.consumer.max.size = 2147483647 09:35:32 kafka | group.consumer.min.heartbeat.interval.ms = 5000 09:35:32 kafka | group.consumer.min.session.timeout.ms = 45000 09:35:32 kafka | group.consumer.session.timeout.ms = 45000 09:35:32 kafka | group.coordinator.new.enable = false 09:35:32 kafka | group.coordinator.threads = 1 09:35:32 kafka | group.initial.rebalance.delay.ms = 3000 09:35:32 kafka | group.max.session.timeout.ms = 1800000 09:35:32 kafka | group.max.size = 2147483647 09:35:32 kafka | group.min.session.timeout.ms = 6000 09:35:32 kafka | initial.broker.registration.timeout.ms = 60000 09:35:32 kafka | inter.broker.listener.name = PLAINTEXT 09:35:32 kafka | inter.broker.protocol.version = 3.5-IV2 09:35:32 kafka | kafka.metrics.polling.interval.secs = 10 09:35:32 kafka | kafka.metrics.reporters = [] 09:35:32 kafka | leader.imbalance.check.interval.seconds = 300 09:35:32 kafka | leader.imbalance.per.broker.percentage = 10 09:35:32 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 09:35:32 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 09:35:32 kafka | log.cleaner.backoff.ms = 15000 09:35:32 kafka | log.cleaner.dedupe.buffer.size = 134217728 09:35:32 kafka | log.cleaner.delete.retention.ms = 86400000 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.122121062Z level=info msg="Migration successfully executed" id="create data_source table" duration=855.758µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.124918182Z level=info msg="Executing migration" id="add index data_source.account_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.125676561Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=758.348µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.129783843Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.130704942Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=920.859µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.133435451Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.135676505Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=2.240974ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.14276926Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.144118633Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.349333ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.149380579Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.162720189Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=13.33992ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.165126404Z level=info msg="Executing migration" id="create data_source table v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.165808901Z level=info msg="Migration successfully executed" id="create data_source 
table v2" duration=682.537µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.170044445Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.171008196Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=965.451µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.173479242Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.174346551Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=866.589µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.177080739Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.177875737Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=794.228µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.185476417Z level=info msg="Executing migration" id="Add column with_credentials" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.188734431Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.261874ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.192754273Z level=info msg="Executing migration" id="Add secure json data column" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.195808196Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.050183ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.198739597Z level=info msg="Executing migration" id="Update data_source table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.198769517Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=29.68µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.203439985Z level=info msg="Executing migration" id="Update initial version to 1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.203633067Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=193.852µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.207792012Z level=info msg="Executing migration" id="Add read_only data column" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.21049469Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.702238ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.213548652Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.213870605Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=321.383µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.216931717Z level=info msg="Executing migration" id="Update json_data with nulls" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.217087629Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=156.002µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.221590966Z level=info msg="Executing migration" id="Add uid column" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.225363826Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.77159ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.234323089Z level=info 
msg="Executing migration" id="Update uid value" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.234778324Z level=info msg="Migration successfully executed" id="Update uid value" duration=304.453µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.237193199Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.23810997Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=916.181µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.242425264Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.24386577Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.439596ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.248502908Z level=info msg="Executing migration" id="create api_key table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.249372928Z level=info msg="Migration successfully executed" id="create api_key table" duration=869.38µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.25245541Z level=info msg="Executing migration" id="add index api_key.account_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.253342499Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=881.129µs 09:35:32 kafka | log.cleaner.enable = true 09:35:32 kafka | log.cleaner.io.buffer.load.factor = 0.9 09:35:32 kafka | log.cleaner.io.buffer.size = 524288 09:35:32 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 09:35:32 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 09:35:32 kafka | log.cleaner.min.cleanable.ratio = 0.5 09:35:32 kafka | log.cleaner.min.compaction.lag.ms = 0 09:35:32 kafka | log.cleaner.threads = 1 09:35:32 kafka | log.cleanup.policy = [delete] 09:35:32 kafka | log.dir = /tmp/kafka-logs 09:35:32 kafka | log.dirs = /var/lib/kafka/data 09:35:32 kafka | log.flush.interval.messages = 9223372036854775807 09:35:32 kafka | log.flush.interval.ms = null 09:35:32 kafka | log.flush.offset.checkpoint.interval.ms = 60000 09:35:32 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 09:35:32 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 09:35:32 kafka | log.index.interval.bytes = 4096 09:35:32 kafka | log.index.size.max.bytes = 10485760 09:35:32 kafka | log.message.downconversion.enable = true 09:35:32 kafka | log.message.format.version = 3.0-IV1 09:35:32 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 09:35:32 kafka | log.message.timestamp.type = CreateTime 09:35:32 kafka | log.preallocate = false 09:35:32 kafka | log.retention.bytes = -1 09:35:32 kafka | log.retention.check.interval.ms = 300000 09:35:32 kafka | log.retention.hours = 168 09:35:32 kafka | log.retention.minutes = null 09:35:32 kafka | log.retention.ms = null 09:35:32 kafka | log.roll.hours = 168 09:35:32 kafka | log.roll.jitter.hours = 0 09:35:32 kafka | log.roll.jitter.ms = null 09:35:32 kafka | log.roll.ms = null 09:35:32 kafka | log.segment.bytes = 1073741824 09:35:32 kafka | log.segment.delete.delay.ms = 60000 09:35:32 kafka | max.connection.creation.rate = 2147483647 09:35:32 kafka | max.connections = 2147483647 09:35:32 kafka | max.connections.per.ip = 2147483647 09:35:32 kafka | max.connections.per.ip.overrides = 09:35:32 kafka | max.incremental.fetch.session.cache.slots = 1000 
09:35:32 kafka | message.max.bytes = 1048588 09:35:32 kafka | metadata.log.dir = null 09:35:32 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 09:35:32 kafka | metadata.log.max.snapshot.interval.ms = 3600000 09:35:32 kafka | metadata.log.segment.bytes = 1073741824 09:35:32 kafka | metadata.log.segment.min.bytes = 8388608 09:35:32 kafka | metadata.log.segment.ms = 604800000 09:35:32 kafka | metadata.max.idle.interval.ms = 500 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.257677695Z level=info msg="Executing migration" id="add index api_key.key" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.258741086Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.058251ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.261516415Z level=info msg="Executing migration" id="add index api_key.account_id_name" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.262571696Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.050831ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.269277897Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.270730461Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.452514ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.277586604Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.278740176Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.154762ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.281293823Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.282133541Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=837.618µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.28482368Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.294304371Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=9.480351ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.298246942Z level=info msg="Executing migration" id="create api_key table v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.298860368Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=613.566µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.301420505Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.302141752Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=725.337µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.30845077Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.310006816Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.557936ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.317441244Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.318458304Z level=info 
msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.0166ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.321016102Z level=info msg="Executing migration" id="copy api_key v1 to v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.321386925Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=370.733µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.323949342Z level=info msg="Executing migration" id="Drop old table api_key_v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.324505798Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=556.256µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.328069396Z level=info msg="Executing migration" id="Update api_key table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.328096557Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=28.171µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.332032248Z level=info msg="Executing migration" id="Add expires to api_key table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.334973419Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.940871ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.338506246Z level=info msg="Executing migration" id="Add service account foreign key" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.340963612Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.457766ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.345879844Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.346067265Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=187.881µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.350396802Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.353025579Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.628287ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.355799849Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.358361756Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.562887ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.361913873Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.362729751Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=815.368µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.365686643Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.366458161Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=771.227µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.369366771Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.370343852Z level=info msg="Migration successfully executed" id="create 
dashboard_snapshot table v5 #2" duration=976.591µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.374427824Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.375309754Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=881.63µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.378522068Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.379436688Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=914.579µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.385220898Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.387450362Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=2.228374ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.39576627Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.395859601Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=93.931µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.39964694Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.399673141Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=29.85µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.402145017Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.407058829Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.908812ms 09:35:32 mariadb | 2024-01-22 09:32:59+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 09:35:32 mariadb | 2024-01-22 09:32:59+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 09:35:32 mariadb | 2024-01-22 09:32:59+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 09:35:32 mariadb | 2024-01-22 09:32:59+00:00 [Note] [Entrypoint]: Initializing database files 09:35:32 mariadb | 2024-01-22 9:32:59 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 09:35:32 mariadb | 2024-01-22 9:32:59 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 09:35:32 mariadb | 2024-01-22 9:32:59 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 09:35:32 mariadb | 09:35:32 mariadb | 09:35:32 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 09:35:32 mariadb | To do so, start the server, then issue the following command: 09:35:32 mariadb | 09:35:32 mariadb | '/usr/bin/mysql_secure_installation' 09:35:32 mariadb | 09:35:32 mariadb | which will also give you the option of removing the test 09:35:32 mariadb | databases and anonymous user created by default. 
This is 09:35:32 mariadb | strongly recommended for production servers. 09:35:32 mariadb | 09:35:32 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 09:35:32 mariadb | 09:35:32 mariadb | Please report any problems at https://mariadb.org/jira 09:35:32 mariadb | 09:35:32 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 09:35:32 mariadb | 09:35:32 mariadb | Consider joining MariaDB's strong and vibrant community: 09:35:32 mariadb | https://mariadb.org/get-involved/ 09:35:32 mariadb | 09:35:32 mariadb | 2024-01-22 09:33:00+00:00 [Note] [Entrypoint]: Database files initialized 09:35:32 mariadb | 2024-01-22 09:33:00+00:00 [Note] [Entrypoint]: Starting temporary server 09:35:32 mariadb | 2024-01-22 09:33:00+00:00 [Note] [Entrypoint]: Waiting for server startup 09:35:32 mariadb | 2024-01-22 9:33:00 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 99 ... 09:35:32 mariadb | 2024-01-22 9:33:00 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 09:35:32 mariadb | 2024-01-22 9:33:00 0 [Note] InnoDB: Number of transaction pools: 1 09:35:32 mariadb | 2024-01-22 9:33:00 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 09:35:32 mariadb | 2024-01-22 9:33:00 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 09:35:32 mariadb | 2024-01-22 9:33:00 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 09:35:32 mariadb | 2024-01-22 9:33:00 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 09:35:32 mariadb | 2024-01-22 9:33:00 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 09:35:32 mariadb | 2024-01-22 9:33:00 0 [Note] InnoDB: Completed initialization of buffer pool 09:35:32 mariadb | 2024-01-22 9:33:00 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 09:35:32 mariadb | 2024-01-22 9:33:01 0 [Note] InnoDB: 128 rollback segments are active. 09:35:32 mariadb | 2024-01-22 9:33:01 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 09:35:32 mariadb | 2024-01-22 9:33:01 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 09:35:32 mariadb | 2024-01-22 9:33:01 0 [Note] InnoDB: log sequence number 45452; transaction id 14 09:35:32 mariadb | 2024-01-22 9:33:01 0 [Note] Plugin 'FEEDBACK' is disabled. 09:35:32 mariadb | 2024-01-22 9:33:01 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 09:35:32 mariadb | 2024-01-22 9:33:01 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 09:35:32 mariadb | 2024-01-22 9:33:01 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 09:35:32 mariadb | 2024-01-22 9:33:01 0 [Note] mariadbd: ready for connections. 09:35:32 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 09:35:32 mariadb | 2024-01-22 09:33:01+00:00 [Note] [Entrypoint]: Temporary server started. 
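[editor's note] At this point the MariaDB entrypoint has initialized the data directory and started a temporary, socket-only server (note "port: 0" in the line above) so the init scripts can run before networking is exposed; the real server is restarted on 3306 once initialization finishes. A quick way to watch for that hand-off, assuming the container is named "mariadb" (an assumption, not shown in this log):

  # the first "ready for connections" is the temporary server (port: 0),
  # the second is the final server listening on 3306
  docker logs -f mariadb | grep -m2 'ready for connections'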
09:35:32 mariadb | 2024-01-22 09:33:03+00:00 [Note] [Entrypoint]: Creating user policy_user 09:35:32 mariadb | 2024-01-22 09:33:03+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 09:35:32 mariadb | 09:35:32 mariadb | 2024-01-22 09:33:03+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 09:35:32 mariadb | 09:35:32 mariadb | 2024-01-22 09:33:03+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 09:35:32 mariadb | #!/bin/bash -xv 09:35:32 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 09:35:32 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 09:35:32 mariadb | # 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.411339563Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.414129323Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.78933ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.417925723Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.418085245Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=152.442µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.421320599Z level=info msg="Executing migration" id="create quota table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.422112757Z level=info msg="Migration successfully executed" id="create quota table v1" duration=791.508µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.426940348Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.427833147Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=892.579µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.436669451Z level=info msg="Executing migration" id="Update quota table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.436747542Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=80.361µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.440552712Z level=info msg="Executing migration" id="create plugin_setting table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.441844835Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.291213ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.444965758Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.445926509Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=960.461µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.450920842Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.454019414Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.093812ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.457028126Z level=info msg="Executing migration" id="Update plugin_setting table charset" 
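[editor's note] The entrypoint executes files from /docker-entrypoint-initdb.d against the temporary server: *.sh scripts are run (hence db.sh above), while unrecognized extensions such as db.conf are skipped with the [Warn] shown. A hedged way to confirm what db.sh created, assuming the container name "mariadb" and the image's standard MYSQL_ROOT_PASSWORD variable:

  # list the init files the entrypoint saw
  docker exec mariadb ls /docker-entrypoint-initdb.d

  # verify the databases created by db.sh (migration, pooling, policyadmin, ...)
  docker exec mariadb sh -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "SHOW DATABASES;"'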
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.457059536Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=32.33µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.46034923Z level=info msg="Executing migration" id="create session table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.461460062Z level=info msg="Migration successfully executed" id="create session table" duration=1.113122ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.466231612Z level=info msg="Executing migration" id="Drop old table playlist table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.466451195Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=219.522µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.473730362Z level=info msg="Executing migration" id="Drop old table playlist_item table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.473971894Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=246.272µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.479359251Z level=info msg="Executing migration" id="create playlist table v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.480693005Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.333344ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.485567946Z level=info msg="Executing migration" id="create playlist item table v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.486741898Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.174812ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.489387596Z level=info msg="Executing migration" id="Update playlist table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.489453217Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=66.331µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.494000585Z level=info msg="Executing migration" id="Update playlist_item table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.494115576Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=114.731µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.498423102Z level=info msg="Executing migration" id="Add playlist column created_at" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.501669576Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.246044ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.504760879Z level=info msg="Executing migration" id="Add playlist column updated_at" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.508590339Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.82941ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.514888345Z level=info msg="Executing migration" id="drop preferences table v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.515131328Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=242.003µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.523506477Z level=info msg="Executing migration" id="drop preferences table v3" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.52383649Z level=info msg="Migration successfully executed" id="drop preferences table 
v3" duration=339.294µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.527152565Z level=info msg="Executing migration" id="create preferences table v3" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.52866105Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.510265ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.533058507Z level=info msg="Executing migration" id="Update preferences table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.533233819Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=174.642µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.537686696Z level=info msg="Executing migration" id="Add column team_id in preferences" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.543313015Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.627329ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.547393488Z level=info msg="Executing migration" id="Update team_id column values in preferences" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.548178176Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=790.678µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.557178962Z level=info msg="Executing migration" id="Add column week_start in preferences" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.560096342Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.91707ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.564199975Z level=info msg="Executing migration" id="Add column preferences.json_data" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.567772273Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.571758ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.570934146Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.571133789Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=177.973µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.573852416Z level=info msg="Executing migration" id="Add preferences index org_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.575007049Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.148103ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.580282185Z level=info msg="Executing migration" id="Add preferences index user_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.581344176Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.061561ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.584227696Z level=info msg="Executing migration" id="create alert table v1" 09:35:32 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 09:35:32 mariadb | # you may not use this file except in compliance with the License. 
09:35:32 mariadb | # You may obtain a copy of the License at 09:35:32 mariadb | # 09:35:32 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 09:35:32 mariadb | # 09:35:32 mariadb | # Unless required by applicable law or agreed to in writing, software 09:35:32 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 09:35:32 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 09:35:32 mariadb | # See the License for the specific language governing permissions and 09:35:32 mariadb | # limitations under the License. 09:35:32 mariadb | 09:35:32 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:35:32 mariadb | do 09:35:32 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 09:35:32 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 09:35:32 mariadb | done 09:35:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:35:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 09:35:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:35:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:35:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 09:35:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:35:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:35:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 09:35:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:35:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:35:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 09:35:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:35:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:35:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 09:35:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:35:32 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 09:35:32 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 09:35:32 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 09:35:32 mariadb | 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.585589451Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.361625ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.588580092Z level=info msg="Executing migration" id="add index alert org_id & id " 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.59024708Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.670478ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.595385284Z level=info msg="Executing migration" id="add index 
alert state" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.596296694Z level=info msg="Migration successfully executed" id="add index alert state" duration=911.48µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.600910212Z level=info msg="Executing migration" id="add index alert dashboard_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.601839821Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=929.009µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.607221218Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.607981817Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=760.119µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.612269282Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.613284743Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.014711ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.616420556Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.617303945Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=883.279µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.620308047Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.633632857Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.32432ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.640422418Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.641124526Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=701.718µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.647379392Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.648316522Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=937.03µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.651305013Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.651652017Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=346.464µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.673048873Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.674122834Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.078851ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.676953103Z level=info msg="Executing migration" id="create alert_notification table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.677629681Z level=info msg="Migration successfully executed" id="create alert_notification table v1" 
duration=675.888µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.680753794Z level=info msg="Executing migration" id="Add column is_default" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.683373022Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.619968ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.689307984Z level=info msg="Executing migration" id="Add column frequency" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.694534479Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.227955ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.701061588Z level=info msg="Executing migration" id="Add column send_reminder" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.708048791Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=6.991183ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.712981123Z level=info msg="Executing migration" id="Add column disable_resolve_message" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.71936738Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=6.385337ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.723166581Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.725328784Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=2.160933ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.72876764Z level=info msg="Executing migration" id="Update alert table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.72881264Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=46.82µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.735246178Z level=info msg="Executing migration" id="Update alert_notification table charset" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.735298689Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=78.66µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.74399554Z level=info msg="Executing migration" id="create notification_journal table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.745279754Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.283964ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.748816361Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.750483619Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.666588ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.754578662Z level=info msg="Executing migration" id="drop alert_notification_journal" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.755431361Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=852.319µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.762397044Z level=info msg="Executing migration" id="create alert_notification_state table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.764625608Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=2.230324ms 
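[editor's note] The grafana "logger=migrator" lines record Grafana's startup schema migrations; each applied migration is also persisted in Grafana's migration_log table, which is why a restart against the same database skips them. A sketch for inspecting that table on the default SQLite backend, assuming a container named "grafana" with the sqlite3 CLI available (neither is confirmed by this log, and the column names are an assumption based on Grafana's migrator):

  docker exec grafana sqlite3 /var/lib/grafana/grafana.db \
    'SELECT migration_id, success FROM migration_log ORDER BY id DESC LIMIT 5;'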
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.77054662Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.771598221Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.050921ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.77431132Z level=info msg="Executing migration" id="Add for to alert table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.778072919Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.760549ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.780879519Z level=info msg="Executing migration" id="Add column uid in alert_notification" 09:35:32 kafka | metadata.max.retention.bytes = 104857600 09:35:32 kafka | metadata.max.retention.ms = 604800000 09:35:32 kafka | metric.reporters = [] 09:35:32 kafka | metrics.num.samples = 2 09:35:32 kafka | metrics.recording.level = INFO 09:35:32 kafka | metrics.sample.window.ms = 30000 09:35:32 kafka | min.insync.replicas = 1 09:35:32 kafka | node.id = 1 09:35:32 kafka | num.io.threads = 8 09:35:32 kafka | num.network.threads = 3 09:35:32 kafka | num.partitions = 1 09:35:32 kafka | num.recovery.threads.per.data.dir = 1 09:35:32 kafka | num.replica.alter.log.dirs.threads = null 09:35:32 kafka | num.replica.fetchers = 1 09:35:32 kafka | offset.metadata.max.bytes = 4096 09:35:32 kafka | offsets.commit.required.acks = -1 09:35:32 kafka | offsets.commit.timeout.ms = 5000 09:35:32 kafka | offsets.load.buffer.size = 5242880 09:35:32 kafka | offsets.retention.check.interval.ms = 600000 09:35:32 kafka | offsets.retention.minutes = 10080 09:35:32 kafka | offsets.topic.compression.codec = 0 09:35:32 kafka | offsets.topic.num.partitions = 50 09:35:32 kafka | offsets.topic.replication.factor = 1 09:35:32 kafka | offsets.topic.segment.bytes = 104857600 09:35:32 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 09:35:32 kafka | password.encoder.iterations = 4096 09:35:32 kafka | password.encoder.key.length = 128 09:35:32 kafka | password.encoder.keyfactory.algorithm = null 09:35:32 kafka | password.encoder.old.secret = null 09:35:32 kafka | password.encoder.secret = null 09:35:32 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 09:35:32 kafka | process.roles = [] 09:35:32 kafka | producer.id.expiration.check.interval.ms = 600000 09:35:32 kafka | producer.id.expiration.ms = 86400000 09:35:32 kafka | producer.purgatory.purge.interval.requests = 1000 09:35:32 kafka | queued.max.request.bytes = -1 09:35:32 kafka | queued.max.requests = 500 09:35:32 kafka | quota.window.num = 11 09:35:32 kafka | quota.window.size.seconds = 1 09:35:32 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 09:35:32 kafka | remote.log.manager.task.interval.ms = 30000 09:35:32 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 09:35:32 kafka | remote.log.manager.task.retry.backoff.ms = 500 09:35:32 kafka | remote.log.manager.task.retry.jitter = 0.2 09:35:32 kafka | remote.log.manager.thread.pool.size = 10 09:35:32 kafka | remote.log.metadata.manager.class.name = null 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.784524598Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.644809ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.789252357Z 
level=info msg="Executing migration" id="Update uid column values in alert_notification" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.78953588Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=286.363µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.79712641Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.79804166Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=914.72µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.802999342Z level=info msg="Executing migration" id="Remove unique index org_id_name" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.803821512Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=821.68µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.808586071Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.812273011Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.68989ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.815434944Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.815500754Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=66.2µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.818350534Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.819170973Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=820.979µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.823835822Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.824810872Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=974.5µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.82933242Z level=info msg="Executing migration" id="Drop old annotation table v4" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.829430881Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=98.211µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.83312189Z level=info msg="Executing migration" id="create annotation table v5" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.833865137Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=743.027µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.838328075Z level=info msg="Executing migration" id="add index annotation 0 v3" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.839281025Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=952.48µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.842164965Z level=info msg="Executing migration" id="add index annotation 1 v3" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.843212737Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.047542ms 09:35:32 grafana | logger=migrator 
t=2024-01-22T09:33:07.846010776Z level=info msg="Executing migration" id="add index annotation 2 v3"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.847013686Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.00283ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.850735525Z level=info msg="Executing migration" id="add index annotation 3 v3"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.851788827Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.052762ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.856341374Z level=info msg="Executing migration" id="add index annotation 4 v3"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.857397966Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.056392ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.860198366Z level=info msg="Executing migration" id="Update annotation table charset"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.860226106Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.34µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.866221259Z level=info msg="Executing migration" id="Add column region_id to annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.87017018Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.948361ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.874362745Z level=info msg="Executing migration" id="Drop category_id index"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.875304924Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=942.159µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.878112944Z level=info msg="Executing migration" id="Add column tags to annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.882172567Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.059003ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.886562283Z level=info msg="Executing migration" id="Create annotation_tag table v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.88722988Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=666.957µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.892146442Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.893159282Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.01239ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.896260286Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.897514349Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.256443ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.90152559Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.917531519Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=16.005809ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.920740033Z level=info msg="Executing migration" id="Create annotation_tag table v3"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.92131608Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=575.387µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.926793397Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.927772888Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=979.811µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.930953841Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.931335845Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=381.454µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.934803061Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.9355991Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=796.129µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.941803606Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.942108249Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=304.973µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.951340876Z level=info msg="Executing migration" id="Add created time to annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.957769724Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.427438ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.961088209Z level=info msg="Executing migration" id="Add updated time to annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.963921679Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=2.8325ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.967070712Z level=info msg="Executing migration" id="Add index for created in annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.967726949Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=655.147µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.971836753Z level=info msg="Executing migration" id="Add index for updated in annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.972735782Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=907.91µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.975923325Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.976211348Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=287.383µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.979899717Z level=info msg="Executing migration" id="Add epoch_end column"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.984031841Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.129504ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.989216926Z level=info msg="Executing migration" id="Add index for epoch_end"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.990138185Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=921.07µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.993293178Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.99351809Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=224.712µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.996543602Z level=info msg="Executing migration" id="Move region to single row"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:07.997020318Z level=info msg="Migration successfully executed" id="Move region to single row" duration=476.456µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.000554575Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.001938409Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.383794ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.007341405Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.008670409Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.328694ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.012383297Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.013802922Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.419015ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.018050596Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.01936695Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.315614ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.025611434Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.026593903Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=981.699µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.031320583Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.032813867Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.492774ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.037162452Z level=info msg="Executing migration" id="Increase tags column to length 4096"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.037331754Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=168.622µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.040786709Z level=info msg="Executing migration" id="create test_data table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.04175423Z level=info msg="Migration successfully executed" id="create test_data table" duration=966.761µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.045108134Z level=info msg="Executing migration" id="create dashboard_version table v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.046238135Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.129771ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.051926904Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.053278198Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.350524ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.056518781Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.058003116Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.483115ms
09:35:32 kafka | remote.log.metadata.manager.class.path = null
09:35:32 kafka | remote.log.metadata.manager.impl.prefix = null
09:35:32 kafka | remote.log.metadata.manager.listener.name = null
09:35:32 kafka | remote.log.reader.max.pending.tasks = 100
09:35:32 kafka | remote.log.reader.threads = 10
09:35:32 kafka | remote.log.storage.manager.class.name = null
09:35:32 kafka | remote.log.storage.manager.class.path = null
09:35:32 kafka | remote.log.storage.manager.impl.prefix = null
09:35:32 kafka | remote.log.storage.system.enable = false
09:35:32 kafka | replica.fetch.backoff.ms = 1000
09:35:32 kafka | replica.fetch.max.bytes = 1048576
09:35:32 kafka | replica.fetch.min.bytes = 1
09:35:32 kafka | replica.fetch.response.max.bytes = 10485760
09:35:32 kafka | replica.fetch.wait.max.ms = 500
09:35:32 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
09:35:32 kafka | replica.lag.time.max.ms = 30000
09:35:32 kafka | replica.selector.class = null
09:35:32 kafka | replica.socket.receive.buffer.bytes = 65536
09:35:32 kafka | replica.socket.timeout.ms = 30000
09:35:32 kafka | replication.quota.window.num = 11
09:35:32 kafka | replication.quota.window.size.seconds = 1
09:35:32 kafka | request.timeout.ms = 30000
09:35:32 kafka | reserved.broker.max.id = 1000
09:35:32 kafka | sasl.client.callback.handler.class = null
09:35:32 kafka | sasl.enabled.mechanisms = [GSSAPI]
09:35:32 kafka | sasl.jaas.config = null
09:35:32 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
09:35:32 kafka | sasl.kerberos.min.time.before.relogin = 60000
09:35:32 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
09:35:32 kafka | sasl.kerberos.service.name = null
09:35:32 kafka | sasl.kerberos.ticket.renew.jitter = 0.05
09:35:32 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
09:35:32 kafka | sasl.login.callback.handler.class = null
09:35:32 kafka | sasl.login.class = null
09:35:32 kafka | sasl.login.connect.timeout.ms = null
09:35:32 kafka | sasl.login.read.timeout.ms = null
09:35:32 kafka | sasl.login.refresh.buffer.seconds = 300
09:35:32 kafka | sasl.login.refresh.min.period.seconds = 60
09:35:32 kafka | sasl.login.refresh.window.factor = 0.8
09:35:32 kafka | sasl.login.refresh.window.jitter = 0.05
09:35:32 kafka | sasl.login.retry.backoff.max.ms = 10000
09:35:32 kafka | sasl.login.retry.backoff.ms = 100
09:35:32 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
09:35:32 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
09:35:32 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
09:35:32 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
09:35:32 mariadb |
09:35:32 mariadb | 2024-01-22 09:33:04+00:00 [Note] [Entrypoint]: Stopping temporary server
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: FTS optimize thread exiting.
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Starting shutdown...
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Buffer pool(s) dump completed at 240122 9:33:04
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Shutdown completed; log sequence number 380913; transaction id 298
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] mariadbd: Shutdown complete
09:35:32 mariadb |
09:35:32 mariadb | 2024-01-22 09:33:04+00:00 [Note] [Entrypoint]: Temporary server stopped
09:35:32 mariadb |
09:35:32 mariadb | 2024-01-22 09:33:04+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
09:35:32 mariadb |
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Number of transaction pools: 1
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Completed initialization of buffer pool
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: 128 rollback segments are active.
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: log sequence number 380913; transaction id 299
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] Plugin 'FEEDBACK' is disabled.
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] Server socket created on IP: '0.0.0.0'.
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] Server socket created on IP: '::'.
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] mariadbd: ready for connections.
09:35:32 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
09:35:32 mariadb | 2024-01-22 9:33:04 0 [Note] InnoDB: Buffer pool(s) load completed at 240122 9:33:04
09:35:32 mariadb | 2024-01-22 9:33:05 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
09:35:32 mariadb | 2024-01-22 9:33:05 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication)
09:35:32 mariadb | 2024-01-22 9:33:05 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
09:35:32 mariadb | 2024-01-22 9:33:05 13 [Warning] Aborted connection 13 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
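The grafana, kafka and mariadb streams above arrive interleaved on one console, each line carrying an "HH:MM:SS service | " prefix. A minimal sketch for splitting such a capture back into per-service logs; the console.log file name is a placeholder, and the prefix format is assumed to match the lines shown here:

import re
from collections import defaultdict

# Jenkins prefixes every console line with HH:MM:SS; docker-compose then
# prefixes each service's output with "<service> | ".
LINE = re.compile(r"^\d{2}:\d{2}:\d{2} (?P<svc>[a-z-]+)\s*\| ?(?P<msg>.*)$")

def split_streams(path="console.log"):  # hypothetical capture file
    streams = defaultdict(list)
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            m = LINE.match(line.rstrip("\n"))
            if m:
                streams[m.group("svc")].append(m.group("msg"))
    return streams

# e.g. split_streams()["mariadb"] would hold only the MariaDB entries above,
# making warnings like the "Aborted connection" lines easy to grep per host.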
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.06129605Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.061511283Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=215.313µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.066489813Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.066831527Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=341.364µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.073211963Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.073309334Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=98.161µs
09:35:32 kafka | sasl.mechanism.controller.protocol = GSSAPI
09:35:32 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
09:35:32 kafka | sasl.oauthbearer.clock.skew.seconds = 30
09:35:32 kafka | sasl.oauthbearer.expected.audience = null
09:35:32 kafka | sasl.oauthbearer.expected.issuer = null
09:35:32 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
09:35:32 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
09:35:32 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
09:35:32 kafka | sasl.oauthbearer.jwks.endpoint.url = null
09:35:32 kafka | sasl.oauthbearer.scope.claim.name = scope
09:35:32 kafka | sasl.oauthbearer.sub.claim.name = sub
09:35:32 kafka | sasl.oauthbearer.token.endpoint.url = null
09:35:32 kafka | sasl.server.callback.handler.class = null
09:35:32 kafka | sasl.server.max.receive.size = 524288
09:35:32 kafka | security.inter.broker.protocol = PLAINTEXT
09:35:32 kafka | security.providers = null
09:35:32 kafka | server.max.startup.time.ms = 9223372036854775807
09:35:32 kafka | socket.connection.setup.timeout.max.ms = 30000
09:35:32 kafka | socket.connection.setup.timeout.ms = 10000
09:35:32 kafka | socket.listen.backlog.size = 50
09:35:32 kafka | socket.receive.buffer.bytes = 102400
09:35:32 kafka | socket.request.max.bytes = 104857600
09:35:32 kafka | socket.send.buffer.bytes = 102400
09:35:32 kafka | ssl.cipher.suites = []
09:35:32 kafka | ssl.client.auth = none
09:35:32 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
09:35:32 kafka | ssl.endpoint.identification.algorithm = https
09:35:32 kafka | ssl.engine.factory.class = null
09:35:32 kafka | ssl.key.password = null
09:35:32 kafka | ssl.keymanager.algorithm = SunX509
09:35:32 kafka | ssl.keystore.certificate.chain = null
09:35:32 kafka | ssl.keystore.key = null
09:35:32 kafka | ssl.keystore.location = null
09:35:32 kafka | ssl.keystore.password = null
09:35:32 kafka | ssl.keystore.type = JKS
09:35:32 kafka | ssl.principal.mapping.rules = DEFAULT
09:35:32 kafka | ssl.protocol = TLSv1.3
09:35:32 kafka | ssl.provider = null
09:35:32 kafka | ssl.secure.random.implementation = null
09:35:32 kafka | ssl.trustmanager.algorithm = PKIX
09:35:32 kafka | ssl.truststore.certificates = null
09:35:32 kafka | ssl.truststore.location = null
09:35:32 kafka | ssl.truststore.password = null
09:35:32 kafka | ssl.truststore.type = JKS
09:35:32 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
09:35:32 kafka | transaction.max.timeout.ms = 900000
09:35:32 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
09:35:32 kafka | transaction.state.log.load.buffer.size = 5242880
09:35:32 kafka | transaction.state.log.min.isr = 2
09:35:32 kafka | transaction.state.log.num.partitions = 50
09:35:32 kafka | transaction.state.log.replication.factor = 3
09:35:32 kafka | transaction.state.log.segment.bytes = 104857600
09:35:32 kafka | transactional.id.expiration.ms = 604800000
09:35:32 kafka | unclean.leader.election.enable = false
09:35:32 kafka | unstable.api.versions.enable = false
09:35:32 kafka | zookeeper.clientCnxnSocket = null
09:35:32 kafka | zookeeper.connect = zookeeper:2181
09:35:32 kafka | zookeeper.connection.timeout.ms = null
09:35:32 kafka | zookeeper.max.in.flight.requests = 10
09:35:32 kafka | zookeeper.metadata.migration.enable = false
09:35:32 kafka | zookeeper.session.timeout.ms = 18000
09:35:32 kafka | zookeeper.set.acl = false
09:35:32 kafka | zookeeper.ssl.cipher.suites = null
09:35:32 kafka | zookeeper.ssl.client.enable = false
09:35:32 kafka | zookeeper.ssl.crl.enable = false
09:35:32 kafka | zookeeper.ssl.enabled.protocols = null
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.077045822Z level=info msg="Executing migration" id="create team table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.078480797Z level=info msg="Migration successfully executed" id="create team table" duration=1.433585ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.082530448Z level=info msg="Executing migration" id="add index team.org_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.084181446Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.649838ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.088483719Z level=info msg="Executing migration" id="add unique index team_org_id_name"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.08947958Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=995.631µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.093109597Z level=info msg="Executing migration" id="Add column uid in team"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.100534744Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=7.421427ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.10510291Z level=info msg="Executing migration" id="Update uid column values in team"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.105241602Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=139.862µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.112476287Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.113993692Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.517515ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.117513918Z level=info msg="Executing migration" id="create team member table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.118767001Z level=info msg="Migration successfully executed" id="create team member table" duration=1.249413ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.1225895Z level=info msg="Executing migration" id="add index team_member.org_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.12348899Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=897.33µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.127366649Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.128764364Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.396235ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.132104768Z level=info msg="Executing migration" id="add index team_member.team_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.133560563Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.454705ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.137005479Z level=info msg="Executing migration" id="Add column email to team table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.141521144Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.514225ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.144733898Z level=info msg="Executing migration" id="Add column external to team_member table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.149432896Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.696119ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.156805772Z level=info msg="Executing migration" id="Add column permission to team_member table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.164602302Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=7.79659ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.16830283Z level=info msg="Executing migration" id="create dashboard acl table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.169142499Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=840.289µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.174795917Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.175894727Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.09714ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.180141461Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.18189482Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.752259ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.185452546Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.186455336Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.00234ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.19361024Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.195121085Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.509735ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.201852075Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.20340377Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.551755ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.206905496Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.207842937Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=937.231µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.211937888Z level=info msg="Executing migration" id="add index dashboard_permission"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.212859618Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=923.15µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.216541486Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.217041451Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=501.555µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.222353925Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.222568797Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=214.552µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.226233425Z level=info msg="Executing migration" id="create tag table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.227289546Z level=info msg="Migration successfully executed" id="create tag table" duration=1.055741ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.23056499Z level=info msg="Executing migration" id="add index tag.key_value"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.231535449Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=970.029µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.237038706Z level=info msg="Executing migration" id="create login attempt table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.237702782Z level=info msg="Migration successfully executed" id="create login attempt table" duration=662.346µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.243983478Z level=info msg="Executing migration" id="add index login_attempt.username"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.244917667Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=933.009µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.248225571Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.249664326Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.438125ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.25296968Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.274014366Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=21.043796ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.277455272Z level=info msg="Executing migration" id="create login_attempt v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.278230339Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=773.177µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.288162461Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.289635937Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.473326ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.293057222Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.293520256Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=462.604µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.297680649Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.298818191Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.136712ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.302130955Z level=info msg="Executing migration" id="create user auth table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.302906894Z level=info msg="Migration successfully executed" id="create user auth table" duration=773.798µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.306832893Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.308575702Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.742029ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.316281721Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.316482873Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=201.002µs
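The login_attempt sequence above (rename to login_attempt_tmp_qwerty, create v2, recreate the index, copy v1 to v2, drop the temporary table) is the usual recreate-and-copy pattern for schema changes that ALTER TABLE cannot express portably. A minimal sqlite3 sketch of the same pattern; the column set is invented for illustration:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE login_attempt (id INTEGER PRIMARY KEY, username TEXT)")
con.execute("INSERT INTO login_attempt (username) VALUES ('admin')")

# 1. move the old table out of the way
con.execute("ALTER TABLE login_attempt RENAME TO login_attempt_tmp_qwerty")
# 2. create the v2 schema and its index
con.execute("CREATE TABLE login_attempt (id INTEGER PRIMARY KEY, username VARCHAR(190))")
con.execute("CREATE INDEX IDX_login_attempt_username ON login_attempt (username)")
# 3. copy the rows across
con.execute("INSERT INTO login_attempt (id, username) "
            "SELECT id, username FROM login_attempt_tmp_qwerty")
# 4. drop the temporary copy
con.execute("DROP TABLE login_attempt_tmp_qwerty")
con.commit()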
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.324145931Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.332076873Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.931702ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.335654879Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.340687671Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.030532ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.345937875Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.351162349Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.223694ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.354321901Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.359410414Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.086673ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.366276944Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.367313805Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.036291ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.371450317Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.376432658Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=4.980281ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.379692312Z level=info msg="Executing migration" id="create server_lock table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.380449729Z level=info msg="Migration successfully executed" id="create server_lock table" duration=756.027µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.384487362Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.385400251Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=964.89µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.389444453Z level=info msg="Executing migration" id="create user auth token table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.3902316Z level=info msg="Migration successfully executed" id="create user auth token table" duration=787.067µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.39499835Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.395952049Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=953.419µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.406051693Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.40770865Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.656207ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.411895023Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.412906333Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.01163ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.416146046Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.421444931Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.298505ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.425521364Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.426509643Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=987.809µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.430605455Z level=info msg="Executing migration" id="create cache_data table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.431484815Z level=info msg="Migration successfully executed" id="create cache_data table" duration=879.2µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.434761938Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.437385345Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=2.622097ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.443994373Z level=info msg="Executing migration" id="create short_url table v1"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.444875832Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=879.288µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.4515183Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.453138706Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.618846ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.456647823Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.456745504Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=98.971µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.46027847Z level=info msg="Executing migration" id="delete alert_definition table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.460364011Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=84.671µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.464523843Z level=info msg="Executing migration" id="recreate alert_definition table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.465335112Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=811.209µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.468590336Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.469548505Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=957.739µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.472567786Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.473491686Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=923.57µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.476629278Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.476692039Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=63.221µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.480607919Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.481974874Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.366235ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.485929964Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.486995274Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.06535ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.494641723Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.495651564Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.006581ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.499000218Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.500704365Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.704017ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.504061171Z level=info msg="Executing migration" id="Add column paused in alert_definition"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.509705098Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.644757ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.513861341Z level=info msg="Executing migration" id="drop alert_definition table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.51481331Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=951.91µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.518152895Z level=info msg="Executing migration" id="delete alert_definition_version table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.518344477Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=191.052µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.522605221Z level=info msg="Executing migration" id="recreate alert_definition_version table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.523979675Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.371225ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.528601952Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.529744834Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.142892ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.53431252Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.535614704Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.301224ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.539919118Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.54009709Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=177.572µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.542637906Z level=info msg="Executing migration" id="drop alert_definition_version table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.54394275Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.304654ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.547162483Z level=info msg="Executing migration" id="create alert_instance table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.548106362Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=942.889µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.552105304Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.553227885Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.121731ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.556848852Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.557845143Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=995.831µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.561081896Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.56737544Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.291424ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.574953358Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.575959639Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.005481ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.579170502Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.580572796Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.400624ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.586483187Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.626005133Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=39.520026ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.629349357Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.666156045Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=36.804708ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.670001615Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.671113256Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.111541ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.675450152Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.676362111Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=911.699µs
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.679734915Z level=info msg="Executing migration" id="add current_reason column related to current_state"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.688994481Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=9.260696ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.692366295Z level=info msg="Executing migration" id="create alert_rule table"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.693456766Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.090991ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.698787601Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.699833431Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.04553ms
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.702963654Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.703943314Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=979.5µs
09:35:32 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
09:35:32 kafka | zookeeper.ssl.keystore.location = null
09:35:32 kafka | zookeeper.ssl.keystore.password = null
09:35:32 kafka | zookeeper.ssl.keystore.type = null
09:35:32 kafka | zookeeper.ssl.ocsp.enable = false
09:35:32 kafka | zookeeper.ssl.protocol = TLSv1.2
09:35:32 kafka | zookeeper.ssl.truststore.location = null
09:35:32 kafka | zookeeper.ssl.truststore.password = null
09:35:32 kafka | zookeeper.ssl.truststore.type = null
09:35:32 kafka | (kafka.server.KafkaConfig)
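Everything from the remote.log.* keys down to the (kafka.server.KafkaConfig) marker above is the broker's effective configuration, printed as one "key = value" pair per line. A minimal sketch that turns such a dump into a typed dict, assuming the kafka lines have already been separated out as above:

def parse_kafka_config(lines):
    """Parse 'key = value' pairs from a KafkaConfig dump into a dict."""
    def coerce(raw):
        # The dump prints Java-ish literals: null, booleans, numbers, lists.
        if raw == "null":
            return None
        if raw in ("true", "false"):
            return raw == "true"
        if raw.startswith("[") and raw.endswith("]"):
            inner = raw[1:-1].strip()
            return [v.strip() for v in inner.split(",")] if inner else []
        for cast in (int, float):
            try:
                return cast(raw)
            except ValueError:
                pass
        return raw
    cfg = {}
    for line in lines:
        if " = " in line:
            key, _, value = line.partition(" = ")
            cfg[key.strip()] = coerce(value.strip())
    return cfg

# With the values above: cfg["zookeeper.connect"] == "zookeeper:2181",
# cfg["ssl.enabled.protocols"] == ["TLSv1.2", "TLSv1.3"],
# cfg["transaction.state.log.min.isr"] == 2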
09:35:32 kafka | [2024-01-22 09:33:03,594] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
09:35:32 kafka | [2024-01-22 09:33:03,596] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
09:35:32 kafka | [2024-01-22 09:33:03,597] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
09:35:32 kafka | [2024-01-22 09:33:03,601] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
09:35:32 kafka | [2024-01-22 09:33:03,636] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
09:35:32 kafka | [2024-01-22 09:33:03,641] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
09:35:32 kafka | [2024-01-22 09:33:03,650] INFO Loaded 0 logs in 14ms (kafka.log.LogManager)
09:35:32 kafka | [2024-01-22 09:33:03,652] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
09:35:32 kafka | [2024-01-22 09:33:03,653] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
09:35:32 kafka | [2024-01-22 09:33:03,670] INFO Starting the log cleaner (kafka.log.LogCleaner)
09:35:32 kafka | [2024-01-22 09:33:03,727] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
09:35:32 kafka | [2024-01-22 09:33:03,746] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
09:35:32 kafka | [2024-01-22 09:33:03,763] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
09:35:32 kafka | [2024-01-22 09:33:03,803] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
09:35:32 kafka | [2024-01-22 09:33:04,144] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
09:35:32 kafka | [2024-01-22 09:33:04,167] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
09:35:32 kafka | [2024-01-22 09:33:04,167] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
09:35:32 kafka | [2024-01-22 09:33:04,172] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
09:35:32 kafka | [2024-01-22 09:33:04,176] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
09:35:32 kafka | [2024-01-22 09:33:04,193] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:35:32 kafka | [2024-01-22 09:33:04,202] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:35:32 kafka | [2024-01-22 09:33:04,197] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:35:32 kafka | [2024-01-22 09:33:04,198] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:35:32 kafka | [2024-01-22 09:33:04,215] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
09:35:32 kafka | [2024-01-22 09:33:04,239] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
09:35:32 kafka | [2024-01-22 09:33:04,260] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1705915984250,1705915984250,1,0,0,72057608975220737,258,0,27
09:35:32 kafka | (kafka.zk.KafkaZkClient)
09:35:32 kafka | [2024-01-22 09:33:04,261] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
09:35:32 kafka | [2024-01-22 09:33:04,315] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
09:35:32 kafka | [2024-01-22 09:33:04,315] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:35:32 kafka | [2024-01-22 09:33:04,321] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:35:32 kafka | [2024-01-22 09:33:04,321] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:35:32 kafka | [2024-01-22 09:33:04,335] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
09:35:32 kafka | [2024-01-22 09:33:04,339] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
09:35:32 kafka | [2024-01-22 09:33:04,340] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
09:35:32 kafka | [2024-01-22 09:33:04,351] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
09:35:32 kafka | [2024-01-22 09:33:04,355] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
09:35:32 kafka | [2024-01-22 09:33:04,356] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
09:35:32 kafka | [2024-01-22 09:33:04,370] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
09:35:32 kafka | [2024-01-22 09:33:04,373] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
09:35:32 kafka | [2024-01-22 09:33:04,373] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
09:35:32 kafka | [2024-01-22 09:33:04,403] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
09:35:32 kafka | [2024-01-22 09:33:04,403] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
09:35:32 kafka | [2024-01-22 09:33:04,412] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
09:35:32 kafka | [2024-01-22 09:33:04,415] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
09:35:32 kafka | [2024-01-22 09:33:04,417] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
09:35:32 kafka | [2024-01-22 09:33:04,422] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
09:35:32 kafka | [2024-01-22 09:33:04,447] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
09:35:32 kafka | [2024-01-22 09:33:04,451] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
09:35:32 kafka | [2024-01-22 09:33:04,457] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
09:35:32 kafka | [2024-01-22 09:33:04,462] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
09:35:32 kafka | [2024-01-22 09:33:04,479] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
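Per the entries above, the broker registers itself in ZooKeeper under /brokers/ids/1 with its PLAINTEXT endpoints. A minimal sketch of how a test could assert that registration using the third-party kazoo client; this is an assumption about tooling, not what the CSIT suite itself does:

import json
from kazoo.client import KazooClient  # third-party: pip install kazoo

def registered_brokers(hosts="zookeeper:2181"):
    zk = KazooClient(hosts=hosts)
    zk.start()
    try:
        ids = zk.get_children("/brokers/ids")   # e.g. ['1'] per the log above
        data, _ = zk.get("/brokers/ids/" + ids[0])
        # broker znodes hold JSON, including the advertised endpoints
        return ids, json.loads(data)
    finally:
        zk.stop()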
org_id, title columns" duration=1.01069ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.759793738Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.7610345Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.239182ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.766251534Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.774194216Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.943032ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.777312488Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.783261859Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.946291ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.786463191Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.787626714Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.103712ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.791656195Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.797738487Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.082232ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.802664548Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.808895012Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.229834ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.812267677Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.812341548Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=73.301µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.81641985Z level=info msg="Executing migration" id="create alert_rule_version table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.817371679Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=951.819µs 09:35:32 kafka | [2024-01-22 09:33:04,480] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,480] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,481] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,481] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,481] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 09:35:32 kafka | [2024-01-22 09:33:04,483] INFO 
[Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,484] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,484] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,487] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 09:35:32 kafka | [2024-01-22 09:33:04,488] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 09:35:32 kafka | [2024-01-22 09:33:04,488] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,490] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:04,499] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 09:35:32 kafka | [2024-01-22 09:33:04,499] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 09:35:32 kafka | [2024-01-22 09:33:04,503] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 09:35:32 kafka | [2024-01-22 09:33:04,503] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 09:35:32 kafka | [2024-01-22 09:33:04,506] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 09:35:32 kafka | [2024-01-22 09:33:04,513] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 09:35:32 kafka | [2024-01-22 09:33:04,513] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 09:35:32 kafka | [2024-01-22 09:33:04,517] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 09:35:32 kafka | [2024-01-22 09:33:04,518] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,521] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser) 09:35:32 kafka | [2024-01-22 09:33:04,522] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 09:35:32 kafka | [2024-01-22 09:33:04,522] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser) 09:35:32 kafka | [2024-01-22 09:33:04,523] INFO Kafka startTimeMs: 1705915984512 (org.apache.kafka.common.utils.AppInfoParser) 09:35:32 kafka | [2024-01-22 09:33:04,527] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,527] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,530] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: 
(kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,531] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,534] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,536] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 09:35:32 kafka | [2024-01-22 09:33:04,551] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:04,612] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:04,631] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 09:35:32 kafka | [2024-01-22 09:33:04,680] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 09:35:32 kafka | [2024-01-22 09:33:09,555] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:09,556] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:35,646] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 09:35:32 kafka | [2024-01-22 09:33:35,646] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:35,653] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 09:35:32 kafka | [2024-01-22 09:33:35,655] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 
(kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:35,685] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(lXAMBo1cQ9-4W8GFf4jc4w),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:35,685] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController) 09:35:32 policy-api | Waiting for mariadb port 3306... 09:35:32 policy-api | mariadb (172.17.0.4:3306) open 09:35:32 policy-api | Waiting for policy-db-migrator port 6824... 09:35:32 policy-api | policy-db-migrator (172.17.0.7:6824) open 09:35:32 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 09:35:32 policy-api | 09:35:32 policy-api | . ____ _ __ _ _ 09:35:32 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 09:35:32 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 09:35:32 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 09:35:32 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 09:35:32 policy-api | =========|_|==============|___/=/_/_/_/ 09:35:32 policy-api | :: Spring Boot :: (v3.1.4) 09:35:32 policy-api | 09:35:32 policy-api | [2024-01-22T09:33:13.654+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin) 09:35:32 policy-api | [2024-01-22T09:33:13.656+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 09:35:32 policy-api | [2024-01-22T09:33:15.336+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 09:35:32 policy-api | [2024-01-22T09:33:15.425+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 79 ms. Found 6 JPA repository interfaces. 09:35:32 policy-api | [2024-01-22T09:33:15.828+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 09:35:32 policy-api | [2024-01-22T09:33:15.828+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 09:35:32 policy-api | [2024-01-22T09:33:16.450+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 09:35:32 policy-api | [2024-01-22T09:33:16.461+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 09:35:32 policy-api | [2024-01-22T09:33:16.463+00:00|INFO|StandardService|main] Starting service [Tomcat] 09:35:32 policy-api | [2024-01-22T09:33:16.464+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] 09:35:32 policy-api | [2024-01-22T09:33:16.561+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 09:35:32 policy-api | [2024-01-22T09:33:16.561+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2841 ms 09:35:32 policy-api | [2024-01-22T09:33:17.030+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 09:35:32 policy-api | [2024-01-22T09:33:17.098+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 09:35:32 policy-api | [2024-01-22T09:33:17.101+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 09:35:32 policy-api | [2024-01-22T09:33:17.144+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 09:35:32 policy-api | [2024-01-22T09:33:17.479+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 09:35:32 policy-api | [2024-01-22T09:33:17.502+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 09:35:32 policy-api | [2024-01-22T09:33:17.613+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717 09:35:32 policy-api | [2024-01-22T09:33:17.616+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 09:35:32 policy-api | [2024-01-22T09:33:17.644+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 09:35:32 policy-api | [2024-01-22T09:33:17.646+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 09:35:32 policy-api | [2024-01-22T09:33:19.435+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 09:35:32 policy-api | [2024-01-22T09:33:19.439+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 09:35:32 policy-api | [2024-01-22T09:33:20.662+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 09:35:32 policy-api | [2024-01-22T09:33:21.432+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 09:35:32 policy-api | [2024-01-22T09:33:22.553+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 09:35:32 policy-api | [2024-01-22T09:33:22.743+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6f3a8d5e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@680f7a5e, org.springframework.security.web.context.SecurityContextHolderFilter@56d3e4a9, org.springframework.security.web.header.HeaderWriterFilter@36c6d53b, org.springframework.security.web.authentication.logout.LogoutFilter@3341ba8e, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@2f84848e, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@2542d320, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@66161fee, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3005133e, org.springframework.security.web.access.ExceptionTranslationFilter@69cf9acb, org.springframework.security.web.access.intercept.AuthorizationFilter@58a01e47] 09:35:32 policy-api | [2024-01-22T09:33:23.553+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 09:35:32 policy-api | [2024-01-22T09:33:23.609+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.820781615Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.821957857Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.174782ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.82520457Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.826941308Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.736168ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.831262192Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.831332423Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=70.681µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.834329563Z level=info msg="Executing migration" id="add column for to alert_rule_version" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.840958502Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.628279ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.844969583Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.851297499Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.326626ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.855510701Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.861602494Z level=info msg="Migration successfully executed" id="add column labels to 
alert_rule_version" duration=6.090973ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.864781346Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.869312783Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.530867ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.873292694Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.880249236Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.955522ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.885954454Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.886019805Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=66.061µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.89043053Z level=info msg="Executing migration" id=create_alert_configuration_table 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.891248849Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=817.959µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.89916073Z level=info msg="Executing migration" id="Add column default in alert_configuration" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.907576327Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=8.415537ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.91081774Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.910969002Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=150.912µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.91473925Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.921216736Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.476736ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.924630522Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.925737343Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.106751ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.929116058Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.93609435Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.977923ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.944218113Z level=info msg="Executing migration" id=create_ngalert_configuration_table 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.944855869Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=636.996µs 09:35:32 
grafana | logger=migrator t=2024-01-22T09:33:08.948445586Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.950194985Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.748819ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.953729221Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.961118667Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.390116ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.965958386Z level=info msg="Executing migration" id="create provenance_type table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.966751845Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=792.969µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.9711875Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.97314198Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.95273ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.978458525Z level=info msg="Executing migration" id="create alert_image table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.979782058Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.322883ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.988130154Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.989126925Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=996.971µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.993202546Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.993307737Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=106.351µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.996961685Z level=info msg="Executing migration" id=create_alert_configuration_history_table 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:08.998646252Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.683937ms 09:35:32 policy-db-migrator | Waiting for mariadb port 3306... 09:35:32 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:35:32 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:35:32 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:35:32 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:35:32 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 09:35:32 policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! 
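The five "nc: connect ... failed" lines followed by "succeeded!" above are the db-migrator's wait-for-port loop: it polls mariadb on 3306 until the socket opens, then starts applying its SQL blocks. A minimal Java sketch of the same pattern (host, port, timeout and back-off values here are illustrative placeholders, not taken from the migrator's actual script):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Wait-for-port sketch mirroring the `nc` retry loop in the
// policy-db-migrator log above: retry a TCP connect until the
// target service (mariadb on 3306) accepts connections.
public final class WaitForPort {
    public static void main(String[] args) throws InterruptedException {
        String host = "mariadb"; // placeholder: compose service name as in the log
        int port = 3306;
        while (true) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 2_000); // 2 s connect timeout
                System.out.printf("Connection to %s %d port succeeded!%n", host, port);
                return;
            } catch (IOException e) {
                System.out.printf("connect to %s port %d (tcp) failed: %s%n", host, port, e.getMessage());
                Thread.sleep(1_000); // back off before the next attempt
            }
        }
    }
}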
09:35:32 policy-db-migrator | 321 blocks 09:35:32 policy-db-migrator | Preparing upgrade release version: 0800 09:35:32 policy-db-migrator | Preparing upgrade release version: 0900 09:35:32 policy-db-migrator | Preparing upgrade release version: 1000 09:35:32 policy-db-migrator | Preparing upgrade release version: 1100 09:35:32 policy-db-migrator | Preparing upgrade release version: 1200 09:35:32 policy-api | [2024-01-22T09:33:23.635+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 09:35:32 policy-api | [2024-01-22T09:33:23.655+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.717 seconds (process running for 11.329) 09:35:32 policy-api | [2024-01-22T09:33:39.913+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 09:35:32 policy-api | [2024-01-22T09:33:39.913+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 09:35:32 policy-api | [2024-01-22T09:33:39.914+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms 09:35:32 policy-api | [2024-01-22T09:33:40.183+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 09:35:32 policy-api | [] 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.003388531Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.004484132Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.095821ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.008068718Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.008765656Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.012304323Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.012983349Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=678.466µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.016527555Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.017522526Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=994.861µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.023112534Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.030660801Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.548007ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.034314198Z level=info msg="Executing migration" id="create library_element table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.035301558Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=987µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.038343719Z level=info msg="Executing migration" id="add index 
library_element org_id-folder_id-name-kind" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.039442631Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.097992ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.043958117Z level=info msg="Executing migration" id="create library_element_connection table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.044731725Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=772.888µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.048782447Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.050644286Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.860609ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.055149012Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.05688796Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.739558ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.061977583Z level=info msg="Executing migration" id="increase max description length to 2048" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.062004273Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=27.95µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.065245276Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.065325657Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=80.041µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.071263457Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.071620681Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=356.524µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.076529372Z level=info msg="Executing migration" id="create data_keys table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.077871515Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.342213ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.081392192Z level=info msg="Executing migration" id="create secrets table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.082160059Z level=info msg="Migration successfully executed" id="create secrets table" duration=768.567µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.08520772Z level=info msg="Executing migration" id="rename data_keys name column to id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.133421035Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=48.211605ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.137511208Z level=info msg="Executing migration" id="add name column into data_keys" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.142475098Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=4.96377ms 09:35:32 grafana 
| logger=migrator t=2024-01-22T09:33:09.147132046Z level=info msg="Executing migration" id="copy data_keys id column values into name" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.147339008Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=205.792µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.151837065Z level=info msg="Executing migration" id="rename data_keys name column to label" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.200315822Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=48.479287ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.209570877Z level=info msg="Executing migration" id="rename data_keys id column back to name" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.266095958Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=56.5172ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.271797256Z level=info msg="Executing migration" id="create kv_store table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.272460853Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=663.567µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.275547084Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.276374482Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=826.998µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.279749677Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 09:35:32 prometheus | ts=2024-01-22T09:33:05.536Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d 09:35:32 prometheus | ts=2024-01-22T09:33:05.536Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" 09:35:32 prometheus | ts=2024-01-22T09:33:05.536Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)" 09:35:32 prometheus | ts=2024-01-22T09:33:05.537Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 09:35:32 prometheus | ts=2024-01-22T09:33:05.537Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" 09:35:32 prometheus | ts=2024-01-22T09:33:05.537Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" 09:35:32 prometheus | ts=2024-01-22T09:33:05.540Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 09:35:32 prometheus | ts=2024-01-22T09:33:05.541Z caller=main.go:1039 level=info msg="Starting TSDB ..." 09:35:32 prometheus | ts=2024-01-22T09:33:05.542Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090 09:35:32 prometheus | ts=2024-01-22T09:33:05.542Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 09:35:32 prometheus | ts=2024-01-22T09:33:05.549Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 09:35:32 prometheus | ts=2024-01-22T09:33:05.549Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=5.89µs 09:35:32 prometheus | ts=2024-01-22T09:33:05.549Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while" 09:35:32 prometheus | ts=2024-01-22T09:33:05.549Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 09:35:32 prometheus | ts=2024-01-22T09:33:05.549Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=66.151µs wal_replay_duration=622.616µs wbl_replay_duration=250ns total_replay_duration=751.217µs 09:35:32 prometheus | ts=2024-01-22T09:33:05.553Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC 09:35:32 prometheus | ts=2024-01-22T09:33:05.553Z caller=main.go:1063 level=info msg="TSDB started" 09:35:32 prometheus | ts=2024-01-22T09:33:05.553Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 09:35:32 prometheus | ts=2024-01-22T09:33:05.555Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.566267ms db_storage=1.78µs remote_storage=2.52µs web_handler=910ns query_engine=1.53µs scrape=344.634µs scrape_sd=204.842µs notify=47.27µs notify_sd=18.89µs rules=2.34µs tracing=16.531µs 09:35:32 prometheus | ts=2024-01-22T09:33:05.555Z caller=main.go:1024 level=info msg="Server is ready to receive web requests." 09:35:32 prometheus | ts=2024-01-22T09:33:05.555Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 09:35:32 policy-apex-pdp | Waiting for mariadb port 3306... 09:35:32 policy-apex-pdp | mariadb (172.17.0.4:3306) open 09:35:32 policy-apex-pdp | Waiting for kafka port 9092... 09:35:32 policy-apex-pdp | kafka (172.17.0.6:9092) open 09:35:32 policy-apex-pdp | Waiting for pap port 6969... 
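Once prometheus logs "Server is ready to receive web requests.", its HTTP API on the address logged above ([::]:9090) answers instant queries under /api/v1/query. A minimal readiness probe in Java (the localhost:9090 target assumes the port mapping used by this compose setup; "up" is a built-in metric):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Probe the Prometheus HTTP API started above: a 200 response with a
// {"status":"success",...} JSON body confirms the server is serving.
public final class PrometheusProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9090/api/v1/query?query=up")) // assumed host:port
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // expect 200 once ready
        System.out.println(response.body());
    }
}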
09:35:32 policy-apex-pdp | pap (172.17.0.9:6969) open 09:35:32 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.527+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.680+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:35:32 policy-apex-pdp | allow.auto.create.topics = true 09:35:32 policy-apex-pdp | auto.commit.interval.ms = 5000 09:35:32 policy-apex-pdp | auto.include.jmx.reporter = true 09:35:32 policy-apex-pdp | auto.offset.reset = latest 09:35:32 policy-apex-pdp | bootstrap.servers = [kafka:9092] 09:35:32 policy-apex-pdp | check.crcs = true 09:35:32 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 09:35:32 policy-apex-pdp | client.id = consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-1 09:35:32 policy-apex-pdp | client.rack = 09:35:32 policy-apex-pdp | connections.max.idle.ms = 540000 09:35:32 policy-apex-pdp | default.api.timeout.ms = 60000 09:35:32 policy-apex-pdp | enable.auto.commit = true 09:35:32 policy-apex-pdp | exclude.internal.topics = true 09:35:32 policy-apex-pdp | fetch.max.bytes = 52428800 09:35:32 policy-apex-pdp | fetch.max.wait.ms = 500 09:35:32 policy-apex-pdp | fetch.min.bytes = 1 09:35:32 policy-apex-pdp | group.id = 864903c6-6b2d-49e1-b529-b1863a334e8b 09:35:32 policy-apex-pdp | group.instance.id = null 09:35:32 policy-apex-pdp | heartbeat.interval.ms = 3000 09:35:32 policy-apex-pdp | interceptor.classes = [] 09:35:32 policy-apex-pdp | internal.leave.group.on.close = true 09:35:32 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 09:35:32 policy-apex-pdp | isolation.level = read_uncommitted 09:35:32 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:35:32 policy-apex-pdp | max.partition.fetch.bytes = 1048576 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.280668237Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=918.12µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.285164052Z level=info msg="Executing migration" id="create permission table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.286330855Z level=info msg="Migration successfully executed" id="create permission table" duration=1.169623ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.291729051Z level=info msg="Executing migration" id="add unique index permission.role_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.293777061Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=2.04951ms 09:35:32 kafka | [2024-01-22 
09:33:35,689] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,690] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,695] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,734] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,741] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,743] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,749] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,750] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,750] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,755] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,755] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,765] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(DwQp2N8YQFWy2VDhXLnyoQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:35,767] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 09:35:32 kafka | [2024-01-22 09:33:35,768] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,773] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,773] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,774] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,775] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager) 09:35:32 kafka | [2024-01-22 09:33:35,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,775] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,775] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,776] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,776] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,776] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,776] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,784] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,784] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,784] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,784] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,784] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,784] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,785] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,785] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 policy-pap | Waiting for mariadb port 3306... 09:35:32 policy-pap | mariadb (172.17.0.4:3306) open 09:35:32 policy-pap | Waiting for kafka port 9092... 09:35:32 policy-pap | kafka (172.17.0.6:9092) open 09:35:32 policy-pap | Waiting for api port 6969... 09:35:32 policy-pap | api (172.17.0.8:6969) open 09:35:32 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 09:35:32 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 09:35:32 policy-pap | 09:35:32 policy-pap | . 
____          _            __ _ _
09:35:32 policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
09:35:32 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
09:35:32 policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
09:35:32 policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
09:35:32 policy-pap |  =========|_|==============|___/=/_/_/_/
09:35:32 policy-pap |  :: Spring Boot ::                (v3.1.4)
09:35:32 policy-pap |
09:35:32 policy-pap | [2024-01-22T09:33:25.910+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 09:35:32 policy-pap | [2024-01-22T09:33:25.912+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 09:35:32 policy-pap | [2024-01-22T09:33:27.715+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 09:35:32 policy-pap | [2024-01-22T09:33:27.829+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 105 ms. Found 7 JPA repository interfaces. 09:35:32 policy-pap | [2024-01-22T09:33:28.227+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 09:35:32 policy-pap | [2024-01-22T09:33:28.228+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 09:35:32 policy-pap | [2024-01-22T09:33:28.892+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 09:35:32 policy-pap | [2024-01-22T09:33:28.900+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 09:35:32 policy-pap | [2024-01-22T09:33:28.903+00:00|INFO|StandardService|main] Starting service [Tomcat] 09:35:32 policy-pap | [2024-01-22T09:33:28.903+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] 09:35:32 policy-pap | [2024-01-22T09:33:28.999+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 09:35:32 policy-pap | [2024-01-22T09:33:29.000+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3016 ms 09:35:32 policy-pap | [2024-01-22T09:33:29.442+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 09:35:32 policy-pap | [2024-01-22T09:33:29.517+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 09:35:32 policy-pap | [2024-01-22T09:33:29.520+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 09:35:32 policy-pap | [2024-01-22T09:33:29.571+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 09:35:32 policy-pap | [2024-01-22T09:33:29.910+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 09:35:32 policy-pap | [2024-01-22T09:33:29.930+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 09:35:32 policy-pap | [2024-01-22T09:33:30.042+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2b03d52f 09:35:32 policy-pap | [2024-01-22T09:33:30.044+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
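
The HikariPool-1 entries above show PAP opening its JDBC connection pool against the mariadb container (172.17.0.4:3306) using the MariaDB driver. A minimal sketch of an equivalent pool setup follows; the database name, username, and password are placeholders for illustration, not values taken from this log.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PapPoolSketch {
    public static void main(String[] args) throws Exception {
        HikariConfig cfg = new HikariConfig();
        // Same driver and host:port the log reports; "policyadmin" and the
        // credentials below are hypothetical placeholders.
        cfg.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin");
        cfg.setUsername("policy_user");      // hypothetical
        cfg.setPassword("policy_password");  // hypothetical
        cfg.setMaximumPoolSize(10);

        // Constructing the data source triggers "HikariPool-1 - Starting...";
        // the first physical connection produces the "Added connection" entry.
        try (HikariDataSource ds = new HikariDataSource(cfg);
             Connection conn = ds.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("pool is up: " + rs.getInt(1));
        }
    }
}

Running this with the HikariCP and mariadb-java-client jars on the classpath reproduces the same Starting/Start completed pair seen in the pap log.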
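
The ConsumerConfig dumps that follow (for both policy-pap and policy-apex-pdp) list the Kafka consumer settings each component starts with: bootstrap.servers=[kafka:9092], latest offset reset, auto-commit enabled, and String deserializers, all subscribing to the policy-pdp-pap topic. A minimal sketch of a consumer built from those same settings; the group.id here is a placeholder (the components generate UUID-based group ids such as d445f4a2-...).

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirrored from the ConsumerConfig listings below;
        // group.id is a hypothetical stand-in for the generated UUID group.
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "example-group");
        props.put("auto.offset.reset", "latest");
        props.put("enable.auto.commit", "true");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Same topic PAP and apex-pdp subscribe to in the log.
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("%s -> %s%n", r.topic(), r.value());
            }
        }
    }
}

With this in place, the PDP_STATUS heartbeat messages published to policy-pdp-pap (visible later in the apex-pdp log) would arrive as plain JSON strings.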
09:35:32 policy-pap | [2024-01-22T09:33:30.075+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 09:35:32 policy-pap | [2024-01-22T09:33:30.077+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 09:35:32 policy-pap | [2024-01-22T09:33:31.901+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 09:35:32 policy-pap | [2024-01-22T09:33:31.904+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 09:35:32 policy-pap | [2024-01-22T09:33:32.442+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 09:35:32 policy-pap | [2024-01-22T09:33:32.988+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 09:35:32 policy-pap | [2024-01-22T09:33:33.076+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 09:35:32 policy-pap | [2024-01-22T09:33:33.343+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:35:32 policy-pap | allow.auto.create.topics = true 09:35:32 policy-pap | auto.commit.interval.ms = 5000 09:35:32 policy-pap | auto.include.jmx.reporter = true 09:35:32 policy-pap | auto.offset.reset = latest 09:35:32 policy-pap | bootstrap.servers = [kafka:9092] 09:35:32 policy-pap | check.crcs = true 09:35:32 policy-pap | client.dns.lookup = use_all_dns_ips 09:35:32 policy-pap | client.id = consumer-d445f4a2-e058-4282-8e5c-a34015c30918-1 09:35:32 policy-pap | client.rack = 09:35:32 policy-pap | connections.max.idle.ms = 540000 09:35:32 policy-pap | default.api.timeout.ms = 60000 09:35:32 policy-pap | enable.auto.commit = true 09:35:32 policy-pap | exclude.internal.topics = true 09:35:32 policy-pap | fetch.max.bytes = 52428800 09:35:32 policy-pap | fetch.max.wait.ms = 500 09:35:32 policy-pap | fetch.min.bytes = 1 09:35:32 policy-pap | group.id = d445f4a2-e058-4282-8e5c-a34015c30918 09:35:32 policy-pap | group.instance.id = null 09:35:32 policy-pap | heartbeat.interval.ms = 3000 09:35:32 policy-pap | interceptor.classes = [] 09:35:32 policy-pap | internal.leave.group.on.close = true 09:35:32 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:35:32 policy-pap | isolation.level = read_uncommitted 09:35:32 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:35:32 policy-pap | max.partition.fetch.bytes = 1048576 09:35:32 policy-apex-pdp | max.poll.interval.ms = 300000 09:35:32 policy-apex-pdp | max.poll.records = 500 09:35:32 policy-apex-pdp | metadata.max.age.ms = 300000 09:35:32 policy-apex-pdp | metric.reporters = [] 09:35:32 policy-apex-pdp | metrics.num.samples = 2 09:35:32 policy-apex-pdp | metrics.recording.level = INFO 09:35:32 
policy-apex-pdp | metrics.sample.window.ms = 30000 09:35:32 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:35:32 policy-apex-pdp | receive.buffer.bytes = 65536 09:35:32 policy-apex-pdp | reconnect.backoff.max.ms = 1000 09:35:32 policy-apex-pdp | reconnect.backoff.ms = 50 09:35:32 policy-apex-pdp | request.timeout.ms = 30000 09:35:32 policy-apex-pdp | retry.backoff.ms = 100 09:35:32 policy-apex-pdp | sasl.client.callback.handler.class = null 09:35:32 policy-apex-pdp | sasl.jaas.config = null 09:35:32 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:35:32 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 09:35:32 policy-apex-pdp | sasl.kerberos.service.name = null 09:35:32 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 09:35:32 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 09:35:32 policy-apex-pdp | sasl.login.callback.handler.class = null 09:35:32 policy-apex-pdp | sasl.login.class = null 09:35:32 policy-apex-pdp | sasl.login.connect.timeout.ms = null 09:35:32 policy-apex-pdp | sasl.login.read.timeout.ms = null 09:35:32 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 09:35:32 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 09:35:32 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 09:35:32 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 09:35:32 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 09:35:32 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 09:35:32 policy-apex-pdp | sasl.mechanism = GSSAPI 09:35:32 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 09:35:32 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 09:35:32 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 09:35:32 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 09:35:32 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 09:35:32 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 09:35:32 policy-apex-pdp | security.protocol = PLAINTEXT 09:35:32 policy-apex-pdp | security.providers = null 09:35:32 policy-apex-pdp | send.buffer.bytes = 131072 09:35:32 policy-apex-pdp | session.timeout.ms = 45000 09:35:32 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 09:35:32 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 09:35:32 policy-apex-pdp | ssl.cipher.suites = null 09:35:32 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:35:32 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 09:35:32 policy-apex-pdp | ssl.engine.factory.class = null 09:35:32 policy-apex-pdp | ssl.key.password = null 09:35:32 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 09:35:32 policy-apex-pdp | ssl.keystore.certificate.chain = null 09:35:32 policy-pap | max.poll.interval.ms = 300000 09:35:32 policy-pap | max.poll.records = 500 09:35:32 policy-pap | metadata.max.age.ms = 300000 09:35:32 policy-pap | metric.reporters = [] 09:35:32 policy-pap | metrics.num.samples = 2 09:35:32 policy-pap | metrics.recording.level = INFO 09:35:32 policy-pap | metrics.sample.window.ms = 30000 09:35:32 policy-pap | 
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:35:32 policy-pap | receive.buffer.bytes = 65536 09:35:32 policy-pap | reconnect.backoff.max.ms = 1000 09:35:32 policy-pap | reconnect.backoff.ms = 50 09:35:32 policy-pap | request.timeout.ms = 30000 09:35:32 policy-pap | retry.backoff.ms = 100 09:35:32 policy-pap | sasl.client.callback.handler.class = null 09:35:32 policy-pap | sasl.jaas.config = null 09:35:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:35:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:35:32 policy-pap | sasl.kerberos.service.name = null 09:35:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:35:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:35:32 policy-pap | sasl.login.callback.handler.class = null 09:35:32 policy-pap | sasl.login.class = null 09:35:32 policy-pap | sasl.login.connect.timeout.ms = null 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.297407619Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.298622871Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.215052ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.302844365Z level=info msg="Executing migration" id="create role table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.303802394Z level=info msg="Migration successfully executed" id="create role table" duration=957.459µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.309050788Z level=info msg="Executing migration" id="add column display_name" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.320809868Z level=info msg="Migration successfully executed" id="add column display_name" duration=11.76012ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.3268411Z level=info msg="Executing migration" id="add column group_name" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.332813772Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.971452ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.337578091Z level=info msg="Executing migration" id="add index role.org_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.338737523Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.156482ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.341970606Z level=info msg="Executing migration" id="add unique index role_org_id_name" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.343131428Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.160472ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.347301641Z level=info msg="Executing migration" id="add index role_org_id_uid" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.348533603Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.227982ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.353191431Z level=info msg="Executing migration" id="create team role table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.354513514Z level=info msg="Migration successfully executed" id="create team role table" duration=1.321573ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.357761288Z level=info msg="Executing migration" id="add index 
team_role.org_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.35895777Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.196453ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.363110362Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.364397736Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.287174ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.368606679Z level=info msg="Executing migration" id="add index team_role.team_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.370073045Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.468096ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.375586381Z level=info msg="Executing migration" id="create user role table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.376712822Z level=info msg="Migration successfully executed" id="create user role table" duration=1.125661ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.382549192Z level=info msg="Executing migration" id="add index user_role.org_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.384175749Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.625637ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.389175481Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.390981279Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.804868ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.397291483Z level=info msg="Executing migration" id="add index user_role.user_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.398720868Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.428745ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.405227685Z level=info msg="Executing migration" id="create builtin role table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.406513808Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.287453ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.409716901Z level=info msg="Executing migration" id="add index builtin_role.role_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.410926233Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.205822ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.414318528Z level=info msg="Executing migration" id="add index builtin_role.name" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.416126967Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.798219ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.419859765Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.425865847Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=6.004992ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.42911583Z level=info msg="Executing migration" id="add index builtin_role.org_id" 09:35:32 grafana | logger=migrator 
t=2024-01-22T09:33:09.430229451Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.113071ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.434530526Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.435563206Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.03229ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.438700848Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 09:35:32 policy-pap | sasl.login.read.timeout.ms = null 09:35:32 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:35:32 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:35:32 policy-pap | sasl.login.refresh.window.factor = 0.8 09:35:32 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:35:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:35:32 policy-pap | sasl.login.retry.backoff.ms = 100 09:35:32 policy-pap | sasl.mechanism = GSSAPI 09:35:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:35:32 policy-pap | sasl.oauthbearer.expected.audience = null 09:35:32 policy-pap | sasl.oauthbearer.expected.issuer = null 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:35:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:35:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:35:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:35:32 policy-pap | security.protocol = PLAINTEXT 09:35:32 policy-pap | security.providers = null 09:35:32 policy-pap | send.buffer.bytes = 131072 09:35:32 policy-pap | session.timeout.ms = 45000 09:35:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:35:32 policy-pap | socket.connection.setup.timeout.ms = 10000 09:35:32 policy-pap | ssl.cipher.suites = null 09:35:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:35:32 policy-pap | ssl.endpoint.identification.algorithm = https 09:35:32 policy-pap | ssl.engine.factory.class = null 09:35:32 policy-pap | ssl.key.password = null 09:35:32 policy-pap | ssl.keymanager.algorithm = SunX509 09:35:32 policy-pap | ssl.keystore.certificate.chain = null 09:35:32 policy-pap | ssl.keystore.key = null 09:35:32 policy-pap | ssl.keystore.location = null 09:35:32 policy-pap | ssl.keystore.password = null 09:35:32 policy-pap | ssl.keystore.type = JKS 09:35:32 policy-pap | ssl.protocol = TLSv1.3 09:35:32 policy-pap | ssl.provider = null 09:35:32 policy-pap | ssl.secure.random.implementation = null 09:35:32 policy-pap | ssl.trustmanager.algorithm = PKIX 09:35:32 policy-pap | ssl.truststore.certificates = null 09:35:32 policy-pap | ssl.truststore.location = null 09:35:32 policy-pap | ssl.truststore.password = null 09:35:32 policy-pap | ssl.truststore.type = JKS 09:35:32 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.439713679Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.014921ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.442885132Z level=info msg="Executing migration" id="add unique index role.uid" 09:35:32 grafana 
| logger=migrator t=2024-01-22T09:33:09.443937212Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.05179ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.448838843Z level=info msg="Executing migration" id="create seed assignment table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.450139156Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.299783ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.453331649Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.455510051Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=2.178333ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.458639513Z level=info msg="Executing migration" id="add column hidden to role table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.466728146Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.088183ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.473382895Z level=info msg="Executing migration" id="permission kind migration" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.485065865Z level=info msg="Migration successfully executed" id="permission kind migration" duration=11.682409ms 09:35:32 policy-apex-pdp | ssl.keystore.key = null 09:35:32 policy-apex-pdp | ssl.keystore.location = null 09:35:32 policy-apex-pdp | ssl.keystore.password = null 09:35:32 policy-apex-pdp | ssl.keystore.type = JKS 09:35:32 policy-apex-pdp | ssl.protocol = TLSv1.3 09:35:32 policy-apex-pdp | ssl.provider = null 09:35:32 policy-apex-pdp | ssl.secure.random.implementation = null 09:35:32 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 09:35:32 policy-apex-pdp | ssl.truststore.certificates = null 09:35:32 policy-apex-pdp | ssl.truststore.location = null 09:35:32 policy-apex-pdp | ssl.truststore.password = null 09:35:32 policy-apex-pdp | ssl.truststore.type = JKS 09:35:32 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:35:32 policy-apex-pdp | 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.815+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.815+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.815+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916016814 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.817+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-1, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Subscribed to topic(s): policy-pdp-pap 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.829+00:00|INFO|ServiceManager|main] service manager starting 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.829+00:00|INFO|ServiceManager|main] service manager starting topics 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.835+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=864903c6-6b2d-49e1-b529-b1863a334e8b, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, 
allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.858+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:35:32 policy-apex-pdp | allow.auto.create.topics = true 09:35:32 policy-apex-pdp | auto.commit.interval.ms = 5000 09:35:32 policy-apex-pdp | auto.include.jmx.reporter = true 09:35:32 policy-apex-pdp | auto.offset.reset = latest 09:35:32 policy-apex-pdp | bootstrap.servers = [kafka:9092] 09:35:32 policy-apex-pdp | check.crcs = true 09:35:32 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 09:35:32 policy-apex-pdp | client.id = consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2 09:35:32 policy-apex-pdp | client.rack = 09:35:32 policy-pap | 09:35:32 policy-pap | [2024-01-22T09:33:33.508+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 09:35:32 policy-pap | [2024-01-22T09:33:33.508+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 09:35:32 policy-pap | [2024-01-22T09:33:33.508+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916013506 09:35:32 policy-pap | [2024-01-22T09:33:33.510+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-1, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Subscribed to topic(s): policy-pdp-pap 09:35:32 policy-pap | [2024-01-22T09:33:33.511+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:35:32 policy-pap | allow.auto.create.topics = true 09:35:32 policy-pap | auto.commit.interval.ms = 5000 09:35:32 policy-pap | auto.include.jmx.reporter = true 09:35:32 policy-pap | auto.offset.reset = latest 09:35:32 policy-pap | bootstrap.servers = [kafka:9092] 09:35:32 policy-pap | check.crcs = true 09:35:32 policy-pap | client.dns.lookup = use_all_dns_ips 09:35:32 policy-pap | client.id = consumer-policy-pap-2 09:35:32 policy-pap | client.rack = 09:35:32 policy-pap | connections.max.idle.ms = 540000 09:35:32 policy-pap | default.api.timeout.ms = 60000 09:35:32 policy-pap | enable.auto.commit = true 09:35:32 policy-pap | exclude.internal.topics = true 09:35:32 policy-pap | fetch.max.bytes = 52428800 09:35:32 policy-pap | fetch.max.wait.ms = 500 09:35:32 policy-pap | fetch.min.bytes = 1 09:35:32 policy-pap | group.id = policy-pap 09:35:32 policy-pap | group.instance.id = null 09:35:32 policy-pap | heartbeat.interval.ms = 3000 09:35:32 policy-pap | interceptor.classes = [] 09:35:32 policy-pap | internal.leave.group.on.close = true 09:35:32 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:35:32 policy-pap | isolation.level = read_uncommitted 09:35:32 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:35:32 policy-pap | max.partition.fetch.bytes = 1048576 09:35:32 policy-pap | max.poll.interval.ms = 300000 09:35:32 policy-pap | max.poll.records = 500 09:35:32 policy-pap | metadata.max.age.ms = 300000 09:35:32 policy-pap | metric.reporters = [] 09:35:32 policy-pap | metrics.num.samples = 2 09:35:32 policy-pap | metrics.recording.level = INFO 09:35:32 policy-pap | metrics.sample.window.ms = 30000 09:35:32 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:35:32 policy-pap | receive.buffer.bytes = 65536 09:35:32 policy-pap | reconnect.backoff.max.ms = 1000 09:35:32 policy-pap | reconnect.backoff.ms = 50 09:35:32 policy-pap | 
request.timeout.ms = 30000 09:35:32 policy-pap | retry.backoff.ms = 100 09:35:32 policy-pap | sasl.client.callback.handler.class = null 09:35:32 policy-pap | sasl.jaas.config = null 09:35:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:35:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:35:32 policy-pap | sasl.kerberos.service.name = null 09:35:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:35:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:35:32 policy-pap | sasl.login.callback.handler.class = null 09:35:32 policy-pap | sasl.login.class = null 09:35:32 policy-pap | sasl.login.connect.timeout.ms = null 09:35:32 policy-pap | sasl.login.read.timeout.ms = null 09:35:32 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:35:32 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:35:32 policy-pap | sasl.login.refresh.window.factor = 0.8 09:35:32 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:35:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:35:32 policy-pap | sasl.login.retry.backoff.ms = 100 09:35:32 policy-pap | sasl.mechanism = GSSAPI 09:35:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:35:32 policy-pap | sasl.oauthbearer.expected.audience = null 09:35:32 policy-pap | sasl.oauthbearer.expected.issuer = null 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:35:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:35:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:35:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:35:32 policy-pap | security.protocol = PLAINTEXT 09:35:32 policy-apex-pdp | connections.max.idle.ms = 540000 09:35:32 policy-apex-pdp | default.api.timeout.ms = 60000 09:35:32 policy-apex-pdp | enable.auto.commit = true 09:35:32 policy-apex-pdp | exclude.internal.topics = true 09:35:32 policy-apex-pdp | fetch.max.bytes = 52428800 09:35:32 policy-apex-pdp | fetch.max.wait.ms = 500 09:35:32 policy-apex-pdp | fetch.min.bytes = 1 09:35:32 policy-apex-pdp | group.id = 864903c6-6b2d-49e1-b529-b1863a334e8b 09:35:32 policy-apex-pdp | group.instance.id = null 09:35:32 policy-apex-pdp | heartbeat.interval.ms = 3000 09:35:32 policy-apex-pdp | interceptor.classes = [] 09:35:32 policy-apex-pdp | internal.leave.group.on.close = true 09:35:32 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 09:35:32 policy-apex-pdp | isolation.level = read_uncommitted 09:35:32 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:35:32 policy-apex-pdp | max.partition.fetch.bytes = 1048576 09:35:32 policy-apex-pdp | max.poll.interval.ms = 300000 09:35:32 policy-apex-pdp | max.poll.records = 500 09:35:32 policy-apex-pdp | metadata.max.age.ms = 300000 09:35:32 policy-apex-pdp | metric.reporters = [] 09:35:32 policy-apex-pdp | metrics.num.samples = 2 09:35:32 policy-apex-pdp | metrics.recording.level = INFO 09:35:32 policy-apex-pdp | metrics.sample.window.ms = 30000 09:35:32 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:35:32 policy-apex-pdp | receive.buffer.bytes = 65536 09:35:32 policy-apex-pdp | reconnect.backoff.max.ms = 1000 09:35:32 
policy-apex-pdp | reconnect.backoff.ms = 50 09:35:32 policy-apex-pdp | request.timeout.ms = 30000 09:35:32 policy-apex-pdp | retry.backoff.ms = 100 09:35:32 policy-apex-pdp | sasl.client.callback.handler.class = null 09:35:32 policy-apex-pdp | sasl.jaas.config = null 09:35:32 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:35:32 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 09:35:32 policy-apex-pdp | sasl.kerberos.service.name = null 09:35:32 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 09:35:32 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 09:35:32 policy-apex-pdp | sasl.login.callback.handler.class = null 09:35:32 policy-apex-pdp | sasl.login.class = null 09:35:32 policy-apex-pdp | sasl.login.connect.timeout.ms = null 09:35:32 policy-apex-pdp | sasl.login.read.timeout.ms = null 09:35:32 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 09:35:32 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 09:35:32 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 09:35:32 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 09:35:32 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 09:35:32 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 09:35:32 policy-apex-pdp | sasl.mechanism = GSSAPI 09:35:32 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 09:35:32 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 09:35:32 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 09:35:32 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 09:35:32 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 09:35:32 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 09:35:32 policy-apex-pdp | security.protocol = PLAINTEXT 09:35:32 policy-apex-pdp | security.providers = null 09:35:32 policy-apex-pdp | send.buffer.bytes = 131072 09:35:32 policy-apex-pdp | session.timeout.ms = 45000 09:35:32 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 09:35:32 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 09:35:32 policy-apex-pdp | ssl.cipher.suites = null 09:35:32 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:35:32 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 09:35:32 policy-apex-pdp | ssl.engine.factory.class = null 09:35:32 policy-apex-pdp | ssl.key.password = null 09:35:32 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 09:35:32 policy-apex-pdp | ssl.keystore.certificate.chain = null 09:35:32 policy-apex-pdp | ssl.keystore.key = null 09:35:32 policy-apex-pdp | ssl.keystore.location = null 09:35:32 policy-apex-pdp | ssl.keystore.password = null 09:35:32 policy-apex-pdp | ssl.keystore.type = JKS 09:35:32 policy-apex-pdp | ssl.protocol = TLSv1.3 09:35:32 policy-apex-pdp | ssl.provider = null 09:35:32 policy-apex-pdp | ssl.secure.random.implementation = null 09:35:32 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 09:35:32 policy-apex-pdp | ssl.truststore.certificates = null 09:35:32 policy-apex-pdp | ssl.truststore.location = null 09:35:32 policy-apex-pdp | ssl.truststore.password = null 09:35:32 policy-apex-pdp | ssl.truststore.type = JKS 09:35:32 policy-apex-pdp | value.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer 09:35:32 policy-apex-pdp | 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.866+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.866+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.866+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916016866 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.866+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Subscribed to topic(s): policy-pdp-pap 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.868+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6096c552-cc78-43e8-aa6c-3e1c0468a918, alive=false, publisher=null]]: starting 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.878+00:00|INFO|ProducerConfig|main] ProducerConfig values: 09:35:32 policy-apex-pdp | acks = -1 09:35:32 policy-apex-pdp | auto.include.jmx.reporter = true 09:35:32 policy-apex-pdp | batch.size = 16384 09:35:32 policy-apex-pdp | bootstrap.servers = [kafka:9092] 09:35:32 policy-apex-pdp | buffer.memory = 33554432 09:35:32 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 09:35:32 policy-apex-pdp | client.id = producer-1 09:35:32 policy-apex-pdp | compression.type = none 09:35:32 policy-apex-pdp | connections.max.idle.ms = 540000 09:35:32 policy-apex-pdp | delivery.timeout.ms = 120000 09:35:32 policy-apex-pdp | enable.idempotence = true 09:35:32 policy-apex-pdp | interceptor.classes = [] 09:35:32 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:35:32 policy-apex-pdp | linger.ms = 0 09:35:32 policy-apex-pdp | max.block.ms = 60000 09:35:32 policy-apex-pdp | max.in.flight.requests.per.connection = 5 09:35:32 policy-apex-pdp | max.request.size = 1048576 09:35:32 policy-apex-pdp | metadata.max.age.ms = 300000 09:35:32 policy-apex-pdp | metadata.max.idle.ms = 300000 09:35:32 policy-apex-pdp | metric.reporters = [] 09:35:32 policy-apex-pdp | metrics.num.samples = 2 09:35:32 policy-apex-pdp | metrics.recording.level = INFO 09:35:32 policy-apex-pdp | metrics.sample.window.ms = 30000 09:35:32 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 09:35:32 policy-apex-pdp | partitioner.availability.timeout.ms = 0 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.488340748Z level=info msg="Executing migration" id="permission attribute migration" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.494872265Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.530337ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.499783836Z level=info msg="Executing migration" id="permission identifier migration" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.505595375Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.812129ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.508915679Z level=info msg="Executing migration" id="add permission identifier index" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.50995445Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.038981ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.515402496Z level=info msg="Executing migration" id="create query_history table v1" 09:35:32 
grafana | logger=migrator t=2024-01-22T09:33:09.517478417Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=2.060211ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.52167676Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.523672341Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.994821ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.529000595Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.529088896Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=89.621µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.533205399Z level=info msg="Executing migration" id="rbac disabled migrator" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.533263419Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=59.68µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.537655485Z level=info msg="Executing migration" id="teams permissions migration" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.538355392Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=714.837µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.54601654Z level=info msg="Executing migration" id="dashboard permissions" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.547226263Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.210792ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.552634418Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.553611598Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=977.01µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.562973624Z level=info msg="Executing migration" id="drop managed folder create actions" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.563178926Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=205.842µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.566231718Z level=info msg="Executing migration" id="alerting notification permissions" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.566742163Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=544.396µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.570458331Z level=info msg="Executing migration" id="create query_history_star table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.571690174Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.227383ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.576675604Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.578748216Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=2.072032ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.58501608Z level=info msg="Executing migration" id="add column org_id in 
query_history_star" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.594825081Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.806941ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.597731731Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.597786731Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=55.34µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.60055063Z level=info msg="Executing migration" id="create correlation table v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.601305607Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=752.447µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.608910146Z level=info msg="Executing migration" id="add index correlations.uid" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.610251949Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.341123ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.617835997Z level=info msg="Executing migration" id="add index correlations.source_uid" 09:35:32 policy-apex-pdp | partitioner.class = null 09:35:32 policy-apex-pdp | partitioner.ignore.keys = false 09:35:32 policy-apex-pdp | receive.buffer.bytes = 32768 09:35:32 policy-apex-pdp | reconnect.backoff.max.ms = 1000 09:35:32 policy-apex-pdp | reconnect.backoff.ms = 50 09:35:32 policy-apex-pdp | request.timeout.ms = 30000 09:35:32 policy-apex-pdp | retries = 2147483647 09:35:32 policy-apex-pdp | retry.backoff.ms = 100 09:35:32 policy-apex-pdp | sasl.client.callback.handler.class = null 09:35:32 policy-apex-pdp | sasl.jaas.config = null 09:35:32 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:35:32 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 09:35:32 policy-apex-pdp | sasl.kerberos.service.name = null 09:35:32 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 09:35:32 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 09:35:32 policy-apex-pdp | sasl.login.callback.handler.class = null 09:35:32 policy-apex-pdp | sasl.login.class = null 09:35:32 policy-apex-pdp | sasl.login.connect.timeout.ms = null 09:35:32 policy-apex-pdp | sasl.login.read.timeout.ms = null 09:35:32 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 09:35:32 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 09:35:32 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 09:35:32 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 09:35:32 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 09:35:32 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 09:35:32 policy-apex-pdp | sasl.mechanism = GSSAPI 09:35:32 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 09:35:32 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 09:35:32 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:35:32 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 09:35:32 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 09:35:32 policy-apex-pdp | 
sasl.oauthbearer.sub.claim.name = sub 09:35:32 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 09:35:32 policy-pap | security.providers = null 09:35:32 policy-pap | send.buffer.bytes = 131072 09:35:32 policy-pap | session.timeout.ms = 45000 09:35:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:35:32 policy-pap | socket.connection.setup.timeout.ms = 10000 09:35:32 policy-pap | ssl.cipher.suites = null 09:35:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:35:32 policy-pap | ssl.endpoint.identification.algorithm = https 09:35:32 policy-pap | ssl.engine.factory.class = null 09:35:32 policy-pap | ssl.key.password = null 09:35:32 policy-pap | ssl.keymanager.algorithm = SunX509 09:35:32 policy-pap | ssl.keystore.certificate.chain = null 09:35:32 policy-pap | ssl.keystore.key = null 09:35:32 policy-pap | ssl.keystore.location = null 09:35:32 policy-pap | ssl.keystore.password = null 09:35:32 policy-pap | ssl.keystore.type = JKS 09:35:32 policy-pap | ssl.protocol = TLSv1.3 09:35:32 policy-pap | ssl.provider = null 09:35:32 policy-pap | ssl.secure.random.implementation = null 09:35:32 policy-pap | ssl.trustmanager.algorithm = PKIX 09:35:32 policy-pap | ssl.truststore.certificates = null 09:35:32 policy-pap | ssl.truststore.location = null 09:35:32 policy-pap | ssl.truststore.password = null 09:35:32 policy-pap | ssl.truststore.type = JKS 09:35:32 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:35:32 policy-pap | 09:35:32 policy-pap | [2024-01-22T09:33:33.517+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 09:35:32 policy-pap | [2024-01-22T09:33:33.517+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 09:35:32 policy-pap | [2024-01-22T09:33:33.517+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916013517 09:35:32 policy-pap | [2024-01-22T09:33:33.517+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 09:35:32 policy-apex-pdp | security.protocol = PLAINTEXT 09:35:32 policy-apex-pdp | security.providers = null 09:35:32 policy-apex-pdp | send.buffer.bytes = 131072 09:35:32 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 09:35:32 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 09:35:32 policy-apex-pdp | ssl.cipher.suites = null 09:35:32 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:35:32 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 09:35:32 policy-apex-pdp | ssl.engine.factory.class = null 09:35:32 policy-apex-pdp | ssl.key.password = null 09:35:32 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 09:35:32 policy-apex-pdp | ssl.keystore.certificate.chain = null 09:35:32 policy-apex-pdp | ssl.keystore.key = null 09:35:32 policy-apex-pdp | ssl.keystore.location = null 09:35:32 policy-apex-pdp | ssl.keystore.password = null 09:35:32 policy-apex-pdp | ssl.keystore.type = JKS 09:35:32 policy-apex-pdp | ssl.protocol = TLSv1.3 09:35:32 policy-apex-pdp | ssl.provider = null 09:35:32 policy-apex-pdp | ssl.secure.random.implementation = null 09:35:32 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 09:35:32 policy-apex-pdp | ssl.truststore.certificates = null 09:35:32 policy-apex-pdp | ssl.truststore.location = null 09:35:32 policy-apex-pdp | ssl.truststore.password = null 09:35:32 policy-apex-pdp | ssl.truststore.type = JKS 09:35:32 policy-apex-pdp | transaction.timeout.ms = 60000 09:35:32 policy-apex-pdp | transactional.id = null 09:35:32 
policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:35:32 policy-apex-pdp | 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.886+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.905+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.905+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.905+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916016905 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.905+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6096c552-cc78-43e8-aa6c-3e1c0468a918, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.905+00:00|INFO|ServiceManager|main] service manager starting set alive 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.905+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.907+00:00|INFO|ServiceManager|main] service manager starting topic sinks 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.907+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.910+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.910+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.910+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.910+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=864903c6-6b2d-49e1-b529-b1863a334e8b, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.910+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=864903c6-6b2d-49e1-b529-b1863a334e8b, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.910+00:00|INFO|ServiceManager|main] service manager starting Create REST server 09:35:32 policy-apex-pdp | 
[2024-01-22T09:33:36.932+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 09:35:32 policy-apex-pdp | [] 09:35:32 policy-apex-pdp | [2024-01-22T09:33:36.934+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"cbd08a14-b590-4688-a538-f09d7c63379d","timestampMs":1705916016915,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.052+00:00|INFO|ServiceManager|main] service manager starting Rest Server 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.052+00:00|INFO|ServiceManager|main] service manager starting 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.052+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.052+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@71a9b4c7{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4628b1d3{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.075+00:00|INFO|ServiceManager|main] service manager started 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.075+00:00|INFO|ServiceManager|main] service manager started 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.075+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 09:35:32 policy-pap | [2024-01-22T09:33:33.819+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 09:35:32 policy-pap | [2024-01-22T09:33:33.956+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 09:35:32 policy-pap | [2024-01-22T09:33:34.177+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6f2bf657, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@27d6467, org.springframework.security.web.context.SecurityContextHolderFilter@53564a4c, org.springframework.security.web.header.HeaderWriterFilter@1734b1a, org.springframework.security.web.authentication.logout.LogoutFilter@4fbbd98c, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@280c3dc0, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@69a294d8, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@11d422fd, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5bf1b528, org.springframework.security.web.access.ExceptionTranslationFilter@1f013047, org.springframework.security.web.access.intercept.AuthorizationFilter@50f13494] 09:35:32 policy-pap | [2024-01-22T09:33:35.012+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 09:35:32 policy-pap | [2024-01-22T09:33:35.072+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 09:35:32 policy-pap | [2024-01-22T09:33:35.093+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 09:35:32 policy-pap | [2024-01-22T09:33:35.111+00:00|INFO|ServiceManager|main] Policy PAP starting 09:35:32 policy-pap | [2024-01-22T09:33:35.111+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 09:35:32 policy-pap | [2024-01-22T09:33:35.112+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 09:35:32 policy-pap | [2024-01-22T09:33:35.112+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 09:35:32 policy-pap | [2024-01-22T09:33:35.112+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 09:35:32 policy-pap | [2024-01-22T09:33:35.113+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 09:35:32 policy-pap | [2024-01-22T09:33:35.113+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 09:35:32 policy-pap | [2024-01-22T09:33:35.118+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d445f4a2-e058-4282-8e5c-a34015c30918, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60f1f95b 09:35:32 policy-pap | [2024-01-22T09:33:35.129+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d445f4a2-e058-4282-8e5c-a34015c30918, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, 
apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:35:32 policy-pap | [2024-01-22T09:33:35.129+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:35:32 policy-pap | allow.auto.create.topics = true 09:35:32 policy-pap | auto.commit.interval.ms = 5000 09:35:32 policy-pap | auto.include.jmx.reporter = true 09:35:32 policy-pap | auto.offset.reset = latest 09:35:32 policy-pap | bootstrap.servers = [kafka:9092] 09:35:32 policy-pap | check.crcs = true 09:35:32 policy-pap | client.dns.lookup = use_all_dns_ips 09:35:32 policy-pap | client.id = consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3 09:35:32 policy-pap | client.rack = 09:35:32 policy-pap | connections.max.idle.ms = 540000 09:35:32 policy-pap | default.api.timeout.ms = 60000 09:35:32 policy-pap | enable.auto.commit = true 09:35:32 policy-pap | exclude.internal.topics = true 09:35:32 policy-pap | fetch.max.bytes = 52428800 09:35:32 policy-pap | fetch.max.wait.ms = 500 09:35:32 policy-pap | fetch.min.bytes = 1 09:35:32 policy-pap | group.id = d445f4a2-e058-4282-8e5c-a34015c30918 09:35:32 policy-pap | group.instance.id = null 09:35:32 policy-pap | heartbeat.interval.ms = 3000 09:35:32 policy-pap | interceptor.classes = [] 09:35:32 policy-pap | internal.leave.group.on.close = true 09:35:32 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:35:32 policy-pap | isolation.level = read_uncommitted 09:35:32 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:35:32 policy-pap | max.partition.fetch.bytes = 1048576 09:35:32 policy-pap | max.poll.interval.ms = 300000 09:35:32 policy-pap | max.poll.records = 500 09:35:32 policy-pap | metadata.max.age.ms = 300000 09:35:32 policy-pap | metric.reporters = [] 09:35:32 policy-pap | metrics.num.samples = 2 09:35:32 policy-pap | metrics.recording.level = INFO 09:35:32 policy-pap | metrics.sample.window.ms = 30000 09:35:32 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:35:32 policy-pap | receive.buffer.bytes = 65536 09:35:32 policy-pap | reconnect.backoff.max.ms = 1000 09:35:32 policy-pap | reconnect.backoff.ms = 50 09:35:32 policy-pap | request.timeout.ms = 30000 09:35:32 policy-pap | retry.backoff.ms = 100 09:35:32 policy-pap | sasl.client.callback.handler.class = null 09:35:32 policy-pap | sasl.jaas.config = null 09:35:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:35:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:35:32 policy-pap | sasl.kerberos.service.name = null 09:35:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:35:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:35:32 policy-pap | sasl.login.callback.handler.class = null 09:35:32 policy-pap | sasl.login.class = null 09:35:32 policy-pap | sasl.login.connect.timeout.ms = null 09:35:32 policy-pap | sasl.login.read.timeout.ms = null 09:35:32 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:35:32 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:35:32 policy-pap | sasl.login.refresh.window.factor = 0.8 09:35:32 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:35:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:35:32 policy-pap | sasl.login.retry.backoff.ms = 100 09:35:32 policy-apex-pdp | 
[2024-01-22T09:33:37.077+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@71a9b4c7{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4628b1d3{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.227+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: CJoIAc7kRTWMdkSfJOx8eQ 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.227+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Cluster ID: CJoIAc7kRTWMdkSfJOx8eQ 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.228+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.229+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.241+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] (Re-)joining group 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.258+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Request joining group due to: need to re-join with the given member-id: consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2-b62307a7-79c9-40ae-a084-8fa87fc4222b 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.258+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.258+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] (Re-)joining group 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.686+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 09:35:32 policy-apex-pdp | [2024-01-22T09:33:37.686+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 09:35:32 policy-apex-pdp | [2024-01-22T09:33:40.263+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Successfully joined group with generation Generation{generationId=1, memberId='consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2-b62307a7-79c9-40ae-a084-8fa87fc4222b', protocol='range'} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:40.268+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Finished assignment for group at generation 1: {consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2-b62307a7-79c9-40ae-a084-8fa87fc4222b=Assignment(partitions=[policy-pdp-pap-0])} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:40.276+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Successfully synced group in generation Generation{generationId=1, memberId='consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2-b62307a7-79c9-40ae-a084-8fa87fc4222b', protocol='range'} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:40.276+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 09:35:32 policy-apex-pdp | [2024-01-22T09:33:40.278+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Adding newly assigned partitions: policy-pdp-pap-0 09:35:32 policy-apex-pdp | [2024-01-22T09:33:40.284+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Found no committed offset for partition policy-pdp-pap-0 09:35:32 policy-apex-pdp | [2024-01-22T09:33:40.313+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2, groupId=864903c6-6b2d-49e1-b529-b1863a334e8b] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
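[Editor's note] The ConsumerConfig dump and the group-join sequence above can be reproduced outside the CSIT job. Below is a minimal sketch, assuming only the values visible in the log (bootstrap server kafka:9092, group id 864903c6-6b2d-49e1-b529-b1863a334e8b, topic policy-pdp-pap, auto.offset.reset=latest, String deserializers); everything else is left at Kafka client defaults. It is an illustration, not the ONAP wrapper code (SingleThreadedKafkaTopicSource) that produced these entries.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        // Key settings copied from the ConsumerConfig dump in the log above.
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "864903c6-6b2d-49e1-b529-b1863a334e8b");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() starts the join sequence logged above: the broker rejects
            // the first JoinGroup with MemberIdRequiredException, the client re-joins
            // with its assigned member id, and the coordinator then assigns
            // policy-pdp-pap-0 and resets the offset.
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15000));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("[IN|KAFKA|%s] %s%n", r.topic(), r.value());
            }
        }
    }
}

The 15-second poll mirrors the fetchTimeout=15000 shown in the topic-source toString(); the PDP_STATUS heartbeats logged above would arrive as plain JSON strings on this topic.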
09:35:32 policy-apex-pdp | [2024-01-22T09:33:56.146+00:00|INFO|RequestLog|qtp830863979-32] 172.17.0.5 - policyadmin [22/Jan/2024:09:33:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.49.1" 09:35:32 policy-apex-pdp | [2024-01-22T09:33:56.910+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"32e11e13-d619-41c4-89b0-27a18f984326","timestampMs":1705916036910,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:56.930+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"32e11e13-d619-41c4-89b0-27a18f984326","timestampMs":1705916036910,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:56.933+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.088+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"cd4be9ef-f8d3-4717-ae63-429516ff01bb","timestampMs":1705916037021,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.095+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.095+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f116a901-9c08-4bd7-b182-b5c55921c9f0","timestampMs":1705916037095,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.096+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"cd4be9ef-f8d3-4717-ae63-429516ff01bb","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f3b36797-dede-40e6-a912-8f5d056ff824","timestampMs":1705916037096,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.108+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f116a901-9c08-4bd7-b182-b5c55921c9f0","timestampMs":1705916037095,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.108+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 09:35:32 policy-pap | sasl.mechanism = GSSAPI 09:35:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:35:32 policy-pap | 
sasl.oauthbearer.expected.audience = null 09:35:32 policy-pap | sasl.oauthbearer.expected.issuer = null 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:35:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:35:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:35:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:35:32 policy-pap | security.protocol = PLAINTEXT 09:35:32 policy-pap | security.providers = null 09:35:32 policy-pap | send.buffer.bytes = 131072 09:35:32 policy-pap | session.timeout.ms = 45000 09:35:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:35:32 policy-pap | socket.connection.setup.timeout.ms = 10000 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.112+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"cd4be9ef-f8d3-4717-ae63-429516ff01bb","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f3b36797-dede-40e6-a912-8f5d056ff824","timestampMs":1705916037096,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.112+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.150+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eac79578-dac3-4e88-8e87-f5481a4b7b6f","timestampMs":1705916037022,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.153+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eac79578-dac3-4e88-8e87-f5481a4b7b6f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"6fb83c35-d45e-49d3-92dc-50ca57031d47","timestampMs":1705916037153,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.161+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eac79578-dac3-4e88-8e87-f5481a4b7b6f","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"6fb83c35-d45e-49d3-92dc-50ca57031d47","timestampMs":1705916037153,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.162+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.186+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"34b780a3-ced9-4779-b152-c1da3c59f2ff","timestampMs":1705916037166,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.187+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"34b780a3-ced9-4779-b152-c1da3c59f2ff","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"fbceb791-c882-4044-a106-553d3943704f","timestampMs":1705916037187,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.194+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"34b780a3-ced9-4779-b152-c1da3c59f2ff","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"fbceb791-c882-4044-a106-553d3943704f","timestampMs":1705916037187,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-apex-pdp | [2024-01-22T09:33:57.195+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 09:35:32 policy-apex-pdp | [2024-01-22T09:34:56.078+00:00|INFO|RequestLog|qtp830863979-28] 172.17.0.5 - policyadmin [22/Jan/2024:09:34:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.49.1" 09:35:32 policy-db-migrator | Preparing upgrade release version: 1300 09:35:32 policy-db-migrator | Done 09:35:32 policy-db-migrator | name version 09:35:32 policy-db-migrator | policyadmin 0 09:35:32 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 09:35:32 policy-db-migrator | upgrade: 0 -> 1300 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, 
ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 09:35:32 kafka | [2024-01-22 09:33:35,785] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | 
[2024-01-22 09:33:35,785] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,785] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,785] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,785] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,786] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,786] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,790] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,790] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,790] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,790] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,790] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,791] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,791] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,794] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,795] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,797] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,797] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,797] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,797] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,797] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,798] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,799] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,802] TRACE [Controller 
id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,802] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,803] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,803] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,803] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,803] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:35,803] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,803] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,803] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,803] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,803] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,804] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,804] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,804] TRACE [Controller id=1 epoch=1] Changed state 
of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,804] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,804] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,804] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,804] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,804] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,814] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,815] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,815] TRACE [Controller id=1 epoch=1] Changed 
state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,815] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,849] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 kafka | [2024-01-22 09:33:35,868] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 09:35:32 kafka | [2024-01-22 09:33:35,873] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 09:35:32 kafka | [2024-01-22 09:33:35,887] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 kafka | [2024-01-22 09:33:35,888] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(lXAMBo1cQ9-4W8GFf4jc4w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,945] INFO [Broker id=1] Finished LeaderAndIsr request in 191ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,945] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,945] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,946] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,946] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 09:35:32 simulator | overriding logback.xml 09:35:32 simulator | 2024-01-22 09:33:04,040 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 09:35:32 simulator | 2024-01-22 09:33:04,131 INFO org.onap.policy.models.simulators starting 09:35:32 simulator | 2024-01-22 09:33:04,131 INFO 
org.onap.policy.models.simulators starting CDS gRPC Server Properties 09:35:32 simulator | 2024-01-22 09:33:04,405 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 09:35:32 simulator | 2024-01-22 09:33:04,406 INFO org.onap.policy.models.simulators starting A&AI simulator 09:35:32 simulator | 2024-01-22 09:33:04,524 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 09:35:32 simulator | 2024-01-22 09:33:04,534 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:35:32 simulator | 2024-01-22 09:33:04,537 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:35:32 simulator | 2024-01-22 09:33:04,547 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 09:35:32 simulator | 2024-01-22 09:33:04,628 INFO Session workerName=node0 09:35:32 simulator | 2024-01-22 09:33:05,110 INFO Using GSON for REST calls 09:35:32 simulator | 2024-01-22 09:33:05,174 INFO Started o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE} 09:35:32 simulator | 2024-01-22 09:33:05,181 INFO Started A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 09:35:32 simulator | 2024-01-22 09:33:05,187 INFO Started Server@16746061{STARTING}[11.0.18,sto=0] @1716ms 09:35:32 simulator | 2024-01-22 
09:33:05,187 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4350 ms. 09:35:32 simulator | 2024-01-22 09:33:05,195 INFO org.onap.policy.models.simulators starting SDNC simulator 09:35:32 simulator | 2024-01-22 09:33:05,200 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 09:35:32 simulator | 2024-01-22 09:33:05,201 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:35:32 simulator | 2024-01-22 09:33:05,208 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:35:32 simulator | 2024-01-22 09:33:05,208 INFO jetty-11.0.18; built: 
2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 09:35:32 simulator | 2024-01-22 09:33:05,210 INFO Session workerName=node0 09:35:32 simulator | 2024-01-22 09:33:05,265 INFO Using GSON for REST calls 09:35:32 simulator | 2024-01-22 09:33:05,275 INFO Started o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE} 09:35:32 simulator | 2024-01-22 09:33:05,276 INFO Started SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 09:35:32 simulator | 2024-01-22 09:33:05,277 INFO Started Server@75459c75{STARTING}[11.0.18,sto=0] @1806ms 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, 
parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 simulator | 2024-01-22 09:33:05,277 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4929 ms. 
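
The policy-db-migrator entries above replay numbered upgrade scripts (0290 through 0400) whose DDL is idempotent: every statement is CREATE TABLE IF NOT EXISTS, so the whole sequence can be re-applied on each container start without failing on tables that already exist. Below is a minimal JDBC sketch of that pattern, reusing the 0300-jpatoscapolicy_targets.sql statement from the log; the JDBC URL, user, and password are placeholder assumptions for illustration, not the real policy-db-migrator configuration or mechanism.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Illustrative sketch only: applies one idempotent DDL step the way the
    // 0290/0300/... upgrade scripts above do. The JDBC URL and credentials
    // are placeholders, not values used by the actual migrator container.
    public class ApplyUpgradeScript {
        public static void main(String[] args) throws Exception {
            String ddl = "CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets "
                       + "(name VARCHAR(120) NULL, version VARCHAR(20) NULL)";
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:mariadb://mariadb:3306/policyadmin",   // assumed URL
                     "policy_user", "policy_pass");               // assumed creds
                 Statement stmt = conn.createStatement()) {
                // IF NOT EXISTS makes the step safe to re-run, which is why
                // the full 0xxx-*.sql sequence can be replayed on every start.
                stmt.executeUpdate(ddl);
            }
        }
    }
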
09:35:32 simulator | 2024-01-22 09:33:05,278 INFO org.onap.policy.models.simulators starting SO simulator 09:35:32 simulator | 2024-01-22 09:33:05,283 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 09:35:32 simulator | 2024-01-22 09:33:05,283 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:35:32 simulator | 2024-01-22 09:33:05,284 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:35:32 simulator | 2024-01-22 09:33:05,286 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 09:35:32 simulator | 2024-01-22 09:33:05,289 INFO Session workerName=node0 09:35:32 simulator | 2024-01-22 09:33:05,338 INFO Using GSON for REST calls 09:35:32 simulator | 2024-01-22 09:33:05,351 INFO Started o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE} 09:35:32 simulator | 2024-01-22 09:33:05,352 INFO Started SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 09:35:32 simulator | 2024-01-22 09:33:05,353 INFO Started Server@30bcf3c1{STARTING}[11.0.18,sto=0] @1882ms 09:35:32 simulator | 2024-01-22 09:33:05,353 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4931 ms. 09:35:32 simulator | 2024-01-22 09:33:05,356 INFO org.onap.policy.models.simulators starting VFC simulator 09:35:32 simulator | 2024-01-22 09:33:05,359 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 09:35:32 simulator | 2024-01-22 09:33:05,359 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:35:32 simulator | 2024-01-22 09:33:05,362 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 09:35:32 simulator | 2024-01-22 09:33:05,363 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 09:35:32 simulator | 2024-01-22 09:33:05,369 INFO Session workerName=node0 09:35:32 simulator | 2024-01-22 09:33:05,414 INFO Using GSON for REST calls 09:35:32 simulator | 2024-01-22 
09:33:05,421 INFO Started o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE} 09:35:32 simulator | 2024-01-22 09:33:05,423 INFO Started VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 09:35:32 simulator | 2024-01-22 09:33:05,423 INFO Started Server@a776e{STARTING}[11.0.18,sto=0] @1952ms 09:35:32 simulator | 2024-01-22 09:33:05,423 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4938 ms. 09:35:32 simulator | 2024-01-22 09:33:05,424 INFO org.onap.policy.models.simulators started 09:35:32 policy-pap | ssl.cipher.suites = null 09:35:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:35:32 policy-pap | ssl.endpoint.identification.algorithm = https 09:35:32 policy-pap | ssl.engine.factory.class = null 09:35:32 policy-pap | ssl.key.password = null 09:35:32 policy-pap | ssl.keymanager.algorithm = SunX509 09:35:32 policy-pap | ssl.keystore.certificate.chain = null 09:35:32 policy-pap | ssl.keystore.key = null 09:35:32 policy-pap | ssl.keystore.location = null 09:35:32 policy-pap | ssl.keystore.password = null 09:35:32 policy-pap | ssl.keystore.type = JKS 09:35:32 policy-pap | ssl.protocol = TLSv1.3 09:35:32 policy-pap | ssl.provider = null 09:35:32 policy-pap | ssl.secure.random.implementation = null 09:35:32 policy-pap | ssl.trustmanager.algorithm = PKIX 09:35:32 policy-pap | ssl.truststore.certificates = null 09:35:32 policy-pap | ssl.truststore.location = null 09:35:32 policy-pap | ssl.truststore.password = null 09:35:32 policy-pap | ssl.truststore.type = JKS 09:35:32 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:35:32 policy-pap | 09:35:32 policy-pap | [2024-01-22T09:33:35.134+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 09:35:32 policy-pap | [2024-01-22T09:33:35.134+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 09:35:32 policy-pap | [2024-01-22T09:33:35.135+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916015134 09:35:32 policy-pap | [2024-01-22T09:33:35.135+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Subscribed to topic(s): policy-pdp-pap 09:35:32 policy-pap | [2024-01-22T09:33:35.135+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 09:35:32 policy-pap | [2024-01-22T09:33:35.135+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=d317926d-49e2-42c7-9164-a3354b87872e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, 
useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@63d75087 09:35:32 policy-pap | [2024-01-22T09:33:35.135+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=d317926d-49e2-42c7-9164-a3354b87872e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:35:32 policy-pap | [2024-01-22T09:33:35.136+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 09:35:32 policy-pap | allow.auto.create.topics = true 09:35:32 policy-pap | auto.commit.interval.ms = 5000 09:35:32 policy-pap | auto.include.jmx.reporter = true 09:35:32 policy-pap | auto.offset.reset = latest 09:35:32 policy-pap | bootstrap.servers = [kafka:9092] 09:35:32 policy-pap | check.crcs = true 09:35:32 policy-pap | client.dns.lookup = use_all_dns_ips 09:35:32 policy-pap | client.id = consumer-policy-pap-4 09:35:32 policy-pap | client.rack = 09:35:32 policy-pap | connections.max.idle.ms = 540000 09:35:32 policy-pap | default.api.timeout.ms = 60000 09:35:32 policy-pap | enable.auto.commit = true 09:35:32 policy-pap | exclude.internal.topics = true 09:35:32 policy-pap | fetch.max.bytes = 52428800 09:35:32 policy-pap | fetch.max.wait.ms = 500 09:35:32 policy-pap | fetch.min.bytes = 1 09:35:32 policy-pap | group.id = policy-pap 09:35:32 policy-pap | group.instance.id = null 09:35:32 policy-pap | heartbeat.interval.ms = 3000 09:35:32 policy-pap | interceptor.classes = [] 09:35:32 policy-pap | internal.leave.group.on.close = true 09:35:32 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 09:35:32 policy-pap | isolation.level = read_uncommitted 09:35:32 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:35:32 policy-pap | max.partition.fetch.bytes = 1048576 09:35:32 policy-pap | max.poll.interval.ms = 300000 09:35:32 policy-pap | max.poll.records = 500 09:35:32 policy-pap | metadata.max.age.ms = 300000 09:35:32 policy-pap | metric.reporters = [] 09:35:32 policy-pap | metrics.num.samples = 2 09:35:32 policy-pap | metrics.recording.level = INFO 09:35:32 policy-pap | metrics.sample.window.ms = 30000 09:35:32 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 09:35:32 policy-pap | receive.buffer.bytes = 65536 09:35:32 policy-pap | reconnect.backoff.max.ms = 1000 09:35:32 policy-pap | reconnect.backoff.ms = 50 09:35:32 policy-pap | request.timeout.ms = 30000 09:35:32 policy-pap | retry.backoff.ms = 100 09:35:32 policy-pap | sasl.client.callback.handler.class = null 09:35:32 policy-pap | sasl.jaas.config = null 09:35:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:35:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:35:32 policy-pap | sasl.kerberos.service.name = null 09:35:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 
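
The ConsumerConfig dump above records the non-default settings PAP uses for this consumer: bootstrap.servers = [kafka:9092], group.id = policy-pap, client.id = consumer-policy-pap-4, auto.offset.reset = latest, and StringDeserializer for both key and value. The sketch below builds an equivalent standalone consumer with the plain Kafka client API and subscribes to the effective topic policy-pdp-pap noted in the TopicBase entry; it is an illustration of the logged configuration, not PAP's own wrapper code (the log shows that as SingleThreadedKafkaTopicSource/KafkaConsumerWrapper).

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    // Standalone equivalent of the "consumer-policy-pap-4" config dumped above;
    // only values that differ from Kafka client defaults are set explicitly.
    public class HeartbeatConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "consumer-policy-pap-4");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // topic=policy-heartbeat maps onto effectiveTopic=policy-pdp-pap
                // in the TopicBase entry logged above.
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n",
                                      record.offset(), record.value());
                }
            }
        }
    }

Note that the dump also shows enable.auto.commit = true with auto.commit.interval.ms = 5000, so offsets for this consumer are committed automatically every five seconds rather than by explicit commitSync() calls.
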
09:35:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:35:32 policy-pap | sasl.login.callback.handler.class = null 09:35:32 policy-pap | sasl.login.class = null 09:35:32 policy-pap | sasl.login.connect.timeout.ms = null 09:35:32 policy-pap | sasl.login.read.timeout.ms = null 09:35:32 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:35:32 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:35:32 policy-pap | sasl.login.refresh.window.factor = 0.8 09:35:32 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:35:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:35:32 policy-pap | sasl.login.retry.backoff.ms = 100 09:35:32 policy-pap | sasl.mechanism = GSSAPI 09:35:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:35:32 policy-pap | sasl.oauthbearer.expected.audience = null 09:35:32 policy-pap | sasl.oauthbearer.expected.issuer = null 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:35:32 kafka | [2024-01-22 09:33:35,946] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,946] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,946] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,946] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,946] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,946] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,946] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,949] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,949] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,949] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,949] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,949] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,949] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,949] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,949] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,949] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=lXAMBo1cQ9-4W8GFf4jc4w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,950] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,950] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,950] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,950] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,950] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,950] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,950] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:35,951] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.619116181Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.282514ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.624079451Z level=info msg="Executing migration" id="add correlation config column" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.636024664Z level=info msg="Migration successfully executed" id="add correlation config column" duration=11.944583ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.639415589Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.640899113Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.483814ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.644682413Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.645861334Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.180191ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.650679995Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.680855094Z level=info 
msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=30.174859ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.68727267Z level=info msg="Executing migration" id="create correlation v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.688000347Z level=info msg="Migration successfully executed" id="create correlation v2" duration=727.337µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.690978298Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.693033129Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.054341ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.697454265Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.698680597Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.224622ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.702087242Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.703241544Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.152372ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.707182704Z level=info msg="Executing migration" id="copy correlation v1 to v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.707456627Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=274.183µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.710916462Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.711805242Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=888.74µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.716258897Z level=info msg="Executing migration" id="add provisioning column" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.729030309Z level=info msg="Migration successfully executed" id="add provisioning column" duration=12.772272ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.734952809Z level=info msg="Executing migration" id="create entity_events table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.735734247Z level=info msg="Migration successfully executed" id="create entity_events table" duration=781.608µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.742271535Z level=info msg="Executing migration" id="create dashboard public config v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.743751909Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.479994ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.747848262Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.74863459Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.752489569Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:35:32 grafana | logger=migrator 
t=2024-01-22T09:33:09.752992894Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.75641613Z level=info msg="Executing migration" id="Drop old dashboard public config table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.757550751Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.132921ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.760848895Z level=info msg="Executing migration" id="recreate dashboard public config v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.7623446Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.487676ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.766514053Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.767651155Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.137872ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.770771587Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.771915769Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.143611ms 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.77590998Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.777017271Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.107141ms 09:35:32 kafka | [2024-01-22 09:33:35,951] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.78275137Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 09:35:32 kafka | [2024-01-22 09:33:35,951] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.784519338Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.767158ms 09:35:32 kafka | [2024-01-22 09:33:35,951] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.788464078Z level=info msg="Executing migration" id="Drop public config table" 09:35:32 kafka | [2024-01-22 09:33:35,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.789704261Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.239552ms 09:35:32 kafka | [2024-01-22 09:33:35,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.794337938Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 09:35:32 kafka | [2024-01-22 09:33:35,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.795324728Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=986.68µs 09:35:32 kafka | [2024-01-22 09:33:35,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.802013597Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 09:35:32 kafka | [2024-01-22 09:33:35,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.80420525Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=2.189043ms 09:35:32 kafka | [2024-01-22 09:33:35,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | -------------- 
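
The controller lines above walk each __consumer_offsets partition (0 through 49) from NewPartition to OnlinePartition with leader=1, i.e. the single broker in this CSIT environment takes leadership of the entire internal offsets topic. A hedged client-side check of that end state is sketched below using the Kafka Admin API; the bootstrap address is taken from the log, but the check itself is an illustration and not part of the CSIT job.

    import java.util.Properties;
    import java.util.Set;

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    // Sketch: confirm from a client that the 50 __consumer_offsets partitions
    // the controller just moved to OnlinePartition are visible with a leader.
    public class OffsetsTopicCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                TopicDescription desc = admin
                        .describeTopics(Set.of("__consumer_offsets"))
                        .allTopicNames().get()
                        .get("__consumer_offsets");
                long online = desc.partitions().stream()
                        .filter(p -> p.leader() != null)  // null leader = offline
                        .count();
                System.out.printf("__consumer_offsets: %d/%d partitions have a leader%n",
                        online, desc.partitions().size());
            }
        }
    }
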
09:35:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.808869317Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 09:35:32 kafka | [2024-01-22 09:33:35,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.81002389Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.152623ms 09:35:32 kafka | [2024-01-22 09:33:35,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.814565986Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 09:35:32 kafka | [2024-01-22 09:33:35,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 09:35:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.815705168Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.139552ms 09:35:32 kafka | [2024-01-22 09:33:35,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | security.protocol = PLAINTEXT 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.818664959Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 09:35:32 kafka | [2024-01-22 09:33:35,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 09:35:32 policy-pap | security.providers = null 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.851229502Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=32.556733ms 09:35:32 kafka | [2024-01-22 09:33:35,961] INFO [Controller 
id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.855307995Z level=info msg="Executing migration" id="add annotations_enabled column" 09:35:32 kafka | [2024-01-22 09:33:35,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | send.buffer.bytes = 131072 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.861567708Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.259473ms 09:35:32 kafka | [2024-01-22 09:33:35,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | session.timeout.ms = 45000 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.865625561Z level=info msg="Executing migration" id="add time_selection_enabled column" 09:35:32 kafka | [2024-01-22 09:33:35,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 09:35:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.874936496Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.312455ms 09:35:32 kafka | [2024-01-22 09:33:35,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | socket.connection.setup.timeout.ms = 10000 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.880931047Z level=info msg="Executing migration" id="delete orphaned public dashboards" 09:35:32 kafka | [2024-01-22 09:33:35,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 09:35:32 policy-pap | ssl.cipher.suites = null 09:35:32 grafana | 
logger=migrator t=2024-01-22T09:33:09.88126229Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=330.873µs 09:35:32 kafka | [2024-01-22 09:33:35,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.884579224Z level=info msg="Executing migration" id="add share column" 09:35:32 kafka | [2024-01-22 09:33:35,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.893552797Z level=info msg="Migration successfully executed" id="add share column" duration=8.972143ms 09:35:32 kafka | [2024-01-22 09:33:35,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.901261876Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 09:35:32 kafka | [2024-01-22 09:33:35,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.9016509Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=388.744µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.90650491Z level=info msg="Executing migration" id="create file table" 09:35:32 kafka | [2024-01-22 09:33:35,962] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.907990605Z level=info msg="Migration successfully executed" id="create file table" duration=1.483385ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.912183288Z level=info msg="Executing migration" id="file table idx: path natural pk" 09:35:32 kafka | [2024-01-22 09:33:35,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.913477842Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.296514ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.917347171Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 09:35:32 kafka | [2024-01-22 09:33:35,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.918639895Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.291794ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.923449383Z level=info msg="Executing migration" id="create file_meta table" 09:35:32 kafka | [2024-01-22 09:33:35,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.924290813Z level=info msg="Migration successfully executed" id="create file_meta table" duration=840.93µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.927149972Z level=info msg="Executing migration" id="file table idx: path key" 09:35:32 kafka | [2024-01-22 09:33:35,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.929654507Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=2.502215ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.935652799Z level=info msg="Executing migration" id="set path collation in file table" 09:35:32 kafka | [2024-01-22 09:33:35,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 09:35:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.935813511Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=131.732µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.940585789Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 09:35:32 kafka | 
[2024-01-22 09:33:35,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 09:35:32 policy-pap | ssl.endpoint.identification.algorithm = https 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.940791851Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=205.592µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.946827794Z level=info msg="Executing migration" id="managed permissions migration" 09:35:32 kafka | [2024-01-22 09:33:35,963] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.947803824Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=976.45µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.954491852Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 09:35:32 kafka | [2024-01-22 09:33:35,964] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.954783575Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=291.473µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.96015694Z level=info msg="Executing migration" id="RBAC action name migrator" 09:35:32 kafka | [2024-01-22 09:33:35,964] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.961637076Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.479896ms 09:35:32 kafka | [2024-01-22 09:33:35,964] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | ssl.engine.factory.class = null 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.965121691Z level=info msg="Executing migration" id="Add UID column 
to playlist" 09:35:32 kafka | [2024-01-22 09:33:35,964] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:35:32 policy-pap | ssl.key.password = null 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.974502387Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.380296ms 09:35:32 kafka | [2024-01-22 09:33:35,965] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | ssl.keymanager.algorithm = SunX509 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.977857013Z level=info msg="Executing migration" id="Update uid column values in playlist" 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | ssl.keystore.certificate.chain = null 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.978029064Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=171.651µs 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.980893734Z level=info msg="Executing migration" id="Add index for uid in playlist" 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0450-pdpgroup.sql 09:35:32 grafana | logger=migrator 
t=2024-01-22T09:33:09.981790372Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=895.948µs 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.985657092Z level=info msg="Executing migration" id="update group index for alert rules" 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.986450611Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=792.149µs 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.989657383Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | ssl.keystore.key = null 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.989880685Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=222.882µs 09:35:32 policy-db-migrator | 09:35:32 policy-pap | ssl.keystore.location = null 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.994181579Z level=info msg="Executing migration" id="admin only 
folder/dashboard permission" 09:35:32 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 09:35:32 policy-pap | ssl.keystore.password = null 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.994882427Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=701.778µs 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | ssl.keystore.type = JKS 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:09.999985159Z level=info msg="Executing migration" id="add action column to seed_assignment" 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:35:32 policy-pap | ssl.protocol = TLSv1.3 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.010642257Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=10.658358ms 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | ssl.provider = null 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.014981052Z level=info msg="Executing migration" id="add scope column to seed_assignment" 09:35:32 policy-db-migrator | 09:35:32 policy-pap | ssl.secure.random.implementation = null 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.022098165Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.115913ms 09:35:32 policy-db-migrator | 09:35:32 policy-pap | ssl.trustmanager.algorithm = PKIX 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.025256108Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 09:35:32 policy-db-migrator | > upgrade 0470-pdp.sql 09:35:32 policy-pap | ssl.truststore.certificates = null 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.026392119Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.135961ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.02939312Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.138823672Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=109.426252ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.142871754Z level=info msg="Executing migration" 
id="add unique index builtin_role_name back" 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.143796103Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=924.219µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.146596292Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 09:35:32 policy-pap | ssl.truststore.location = null 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.14746174Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=865.068µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.152481062Z level=info msg="Executing migration" id="add primary key to seed_assigment" 09:35:32 policy-pap | ssl.truststore.password = null 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.188251829Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=35.772667ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.192455661Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.192732874Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=277.933µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.19619034Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 09:35:32 policy-pap | ssl.truststore.type = JKS 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.196478853Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=288.293µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.200436813Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 09:35:32 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 09:35:32 kafka | [2024-01-22 09:33:35,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | 09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.200839237Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=400.424µs 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.204818799Z level=info msg="Executing migration" id="create folder table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.206363724Z level=info msg="Migration successfully executed" id="create folder table" duration=1.543795ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.21080216Z level=info msg="Executing migration" id="Add index for parent_uid" 09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.212013172Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.210602ms 09:35:32 policy-pap | [2024-01-22T09:33:35.140+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 09:35:32 policy-db-migrator | 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.216155565Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 09:35:32 policy-pap | 
[2024-01-22T09:33:35.140+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.217493798Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.338273ms
09:35:32 policy-pap | [2024-01-22T09:33:35.140+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916015140
09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.222038165Z level=info msg="Executing migration" id="Update folder title length"
09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
09:35:32 policy-pap | [2024-01-22T09:33:35.140+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.222063955Z level=info msg="Migration successfully executed" id="Update folder title length" duration=26.61µs
09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-pap | [2024-01-22T09:33:35.140+00:00|INFO|ServiceManager|main] Policy PAP starting topics
09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.226732252Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
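The policy-pap records above show the PAP's Kafka consumer coming up: the ConsumerConfig dump leaves every ssl.* value at its null/JKS default (so the transport is effectively PLAINTEXT), value.deserializer is the String deserializer, and consumer-policy-pap-4 in group policy-pap subscribes to policy-pdp-pap on kafka:9092. A minimal sketch of an equivalent subscription, assuming the Python kafka-python client rather than the Java client the PAP actually embeds; the broker address, group id, and topic come from the log, everything else is illustrative:

    from kafka import KafkaConsumer  # pip install kafka-python (assumed client, not part of this job)

    # Mirrors the ConsumerConfig dump above: plaintext transport (all ssl.* unset),
    # string values, group "policy-pap", topic "policy-pdp-pap" on kafka:9092.
    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers="kafka:9092",
        group_id="policy-pap",
        security_protocol="PLAINTEXT",
        value_deserializer=lambda raw: raw.decode("utf-8"),
        consumer_timeout_ms=15000,  # rough analogue of the fetchTimeout=15000 the PAP logs
    )
    for record in consumer:  # blocks until records arrive or the timeout elapses
        print(record.topic, record.offset, record.value)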
09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | [2024-01-22T09:33:35.140+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=d317926d-49e2-42c7-9164-a3354b87872e, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.227944245Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.211883ms 09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.140+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d445f4a2-e058-4282-8e5c-a34015c30918, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.231077107Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.232172048Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.094941ms 09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.140+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=7d572172-1cce-42ae-9217-8134b589dbf2, 
alive=false, publisher=null]]: starting 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.235757216Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.236979218Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.221662ms 09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.239866577Z level=info msg="Executing migration" id="create anon_device table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.240805848Z level=info msg="Migration successfully executed" id="create anon_device table" duration=938.581µs 09:35:32 policy-pap | [2024-01-22T09:33:35.156+00:00|INFO|ProducerConfig|main] ProducerConfig values: 09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.244751558Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.24601971Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.268292ms 09:35:32 policy-pap | acks = -1 09:35:32 kafka | [2024-01-22 09:33:35,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.249226594Z level=info msg="Executing migration" id="add index anon_device.updated_at" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.251535757Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.309563ms 09:35:32 policy-pap | auto.include.jmx.reporter = true 09:35:32 kafka | [2024-01-22 09:33:35,968] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.254865982Z level=info msg="Executing migration" id="create signing_key table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.25571216Z level=info msg="Migration successfully executed" id="create signing_key table" duration=846.128µs 09:35:32 kafka | [2024-01-22 09:33:35,968] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 
0490-pdpsubgroup_pdp.sql
09:35:32 kafka | [2024-01-22 09:33:35,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
09:35:32 policy-pap | batch.size = 16384
09:35:32 kafka | [2024-01-22 09:33:35,971] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-db-migrator |
09:35:32 policy-pap | bootstrap.servers = [kafka:9092]
09:35:32 kafka | [2024-01-22 09:33:35,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator |
09:35:32 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
09:35:32 policy-pap | buffer.memory = 33554432
09:35:32 kafka | [2024-01-22 09:33:35,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
09:35:32 policy-db-migrator | --------------
09:35:32 policy-db-migrator |
09:35:32 kafka | [2024-01-22 09:33:35,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator |
09:35:32 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
09:35:32 kafka | [2024-01-22 09:33:35,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
09:35:32 kafka | [2024-01-22 09:33:35,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
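The policy-db-migrator records interleaved here walk through numbered upgrade scripts (0480-pdpstatistics.sql, 0490-pdpsubgroup_pdp.sql, 0500-pdpsubgroup.sql, 0510-toscacapabilityassignment.sql, ...), each wrapped in CREATE TABLE IF NOT EXISTS so a rerun is harmless. A sketch of that ordered, idempotent pattern, using Python's stdlib sqlite3 purely for illustration; the real migrator targets MariaDB, and the database and script-directory names below are hypothetical:

    import sqlite3
    from pathlib import Path

    def apply_upgrades(db_path: str, sql_dir: str) -> None:
        """Apply zero-padded upgrade scripts (0480-..., 0490-..., 0500-...) in order."""
        conn = sqlite3.connect(db_path)
        try:
            # Lexicographic order equals numeric order because the prefixes are zero-padded.
            for script in sorted(Path(sql_dir).glob("*.sql")):
                print(f"> upgrade {script.name}")  # same marker the migrator logs
                conn.executescript(script.read_text())  # CREATE TABLE IF NOT EXISTS => safe to rerun
            conn.commit()
        finally:
            conn.close()

    apply_upgrades("policyadmin.db", "sql/upgrades")  # hypothetical paths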
09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:35,977] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 09:35:32 kafka | [2024-01-22 09:33:35,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 09:35:32 kafka | [2024-01-22 09:33:35,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:35,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 09:35:32 kafka | [2024-01-22 09:33:35,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 09:35:32 kafka | [2024-01-22 09:33:35,981] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:35,982] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.259354717Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.26053247Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.177613ms 09:35:32 kafka | [2024-01-22 09:33:35,982] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 
(state.change.logger) 09:35:32 policy-pap | client.dns.lookup = use_all_dns_ips 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.265366549Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.267530531Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.167722ms 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.274000477Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.27431885Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=322.143µs 09:35:32 kafka | [2024-01-22 09:33:35,982] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 policy-pap | client.id = producer-1 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.278440813Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.291678019Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=13.237706ms 09:35:32 kafka | [2024-01-22 09:33:35,982] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 policy-pap | compression.type = none 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.294802841Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.295321316Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=519.095µs 09:35:32 kafka | [2024-01-22 09:33:35,982] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.298567659Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.299694621Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.126792ms 09:35:32 kafka | [2024-01-22 09:33:35,982] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.303866704Z 
level=info msg="Executing migration" id="create sso_setting table" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.304832643Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=965.289µs 09:35:32 kafka | [2024-01-22 09:33:35,982] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.311404741Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.312717214Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.314603ms 09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.32112987Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.321581785Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=453.565µs 09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=migrator t=2024-01-22T09:33:10.326164502Z level=info msg="migrations completed" performed=523 skipped=0 duration=3.704555785s 09:35:32 grafana | logger=sqlstore t=2024-01-22T09:33:10.33868712Z level=info msg="Created default admin" user=admin 09:35:32 kafka | [2024-01-22 09:33:35,982] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 09:35:32 grafana | logger=sqlstore t=2024-01-22T09:33:10.338999454Z level=info msg="Created default organization" 09:35:32 grafana | logger=secrets t=2024-01-22T09:33:10.343456239Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 09:35:32 grafana | logger=plugin.store t=2024-01-22T09:33:10.367806349Z level=info msg="Loading plugins..." 
09:35:32 grafana | logger=local.finder t=2024-01-22T09:33:10.409241544Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=plugin.store t=2024-01-22T09:33:10.409298745Z level=info msg="Plugins loaded" count=55 duration=41.494056ms 09:35:32 grafana | logger=query_data t=2024-01-22T09:33:10.412843971Z level=info msg="Query Service initialization" 09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=live.push_http t=2024-01-22T09:33:10.416422427Z level=info msg="Live Push Gateway initialization" 09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 grafana | logger=ngalert.migration t=2024-01-22T09:33:10.424031196Z level=info msg=Starting 09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 09:35:32 grafana | logger=ngalert.migration orgID=1 t=2024-01-22T09:33:10.425692263Z level=info msg="Migrating alerts for organisation" 09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 grafana | logger=ngalert.migration orgID=1 t=2024-01-22T09:33:10.426032926Z level=info msg="Alerts found to migrate" alerts=0 09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) 
NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 09:35:32 grafana | logger=ngalert.migration orgID=1 t=2024-01-22T09:33:10.42640817Z level=warn msg="No available receivers" 09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-01-22T09:33:10.431356521Z level=info msg="Completed legacy migration" 09:35:32 grafana | logger=infra.usagestats.collector t=2024-01-22T09:33:10.472956137Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 09:35:32 kafka | [2024-01-22 09:33:35,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=provisioning.datasources t=2024-01-22T09:33:10.475816146Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 09:35:32 grafana | logger=provisioning.alerting t=2024-01-22T09:33:10.493871162Z level=info msg="starting to provision alerting" 09:35:32 policy-pap | connections.max.idle.ms = 540000 09:35:32 kafka | [2024-01-22 09:33:35,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=provisioning.alerting t=2024-01-22T09:33:10.493894262Z level=info msg="finished to provision alerting" 09:35:32 grafana | logger=grafanaStorageLogger t=2024-01-22T09:33:10.494375297Z level=info msg="Storage starting" 09:35:32 policy-pap | delivery.timeout.ms = 120000 09:35:32 kafka | [2024-01-22 09:33:35,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=ngalert.state.manager t=2024-01-22T09:33:10.495191585Z level=info msg="Warming state cache for startup" 09:35:32 grafana | logger=ngalert.multiorg.alertmanager 
t=2024-01-22T09:33:10.495209555Z level=info msg="Starting MultiOrg Alertmanager" 09:35:32 policy-pap | enable.idempotence = true 09:35:32 kafka | [2024-01-22 09:33:35,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=http.server t=2024-01-22T09:33:10.497908303Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 09:35:32 grafana | logger=ngalert.state.manager t=2024-01-22T09:33:10.558326842Z level=info msg="State cache has been initialized" states=0 duration=63.133507ms 09:35:32 policy-pap | interceptor.classes = [] 09:35:32 kafka | [2024-01-22 09:33:35,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=ngalert.scheduler t=2024-01-22T09:33:10.558373963Z level=info msg="Starting scheduler" tickInterval=10s 09:35:32 grafana | logger=ticker t=2024-01-22T09:33:10.558563565Z level=info msg=starting first_tick=2024-01-22T09:33:20Z 09:35:32 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:35:32 grafana | logger=plugins.update.checker t=2024-01-22T09:33:10.59808368Z level=info msg="Update check succeeded" duration=103.598772ms 09:35:32 grafana | logger=sqlstore.transactions t=2024-01-22T09:33:10.691328855Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 09:35:32 kafka | [2024-01-22 09:33:35,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=sqlstore.transactions t=2024-01-22T09:33:10.702616521Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 09:35:32 grafana | logger=sqlstore.transactions t=2024-01-22T09:33:10.714165039Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 09:35:32 kafka | [2024-01-22 09:33:35,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 09:35:32 grafana | logger=grafana.update.checker t=2024-01-22T09:33:10.724830189Z level=info msg="Update check succeeded" duration=230.534733ms 09:35:32 grafana | logger=infra.usagestats t=2024-01-22T09:35:09.506961587Z level=info msg="Usage stats are ready to report" 09:35:32 kafka | [2024-01-22 09:33:35,984] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-pap | linger.ms = 0
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-pap | max.block.ms = 60000
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
09:35:32 kafka | [2024-01-22 09:33:35,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-pap | max.in.flight.requests.per.connection = 5
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-pap | max.request.size = 1048576
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0570-toscadatatype.sql
09:35:32 kafka | [2024-01-22 09:33:35,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
09:35:32 kafka | [2024-01-22 09:33:35,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0580-toscadatatypes.sql
09:35:32 policy-pap | metadata.max.age.ms = 300000
09:35:32 kafka | [2024-01-22 09:33:35,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-pap | metadata.max.idle.ms = 300000
09:35:32 kafka | [2024-01-22 09:33:35,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
09:35:32 kafka | [2024-01-22 09:33:35,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-pap | metric.reporters = []
09:35:32 kafka | [2024-01-22 09:33:35,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | metrics.num.samples = 2
09:35:32 kafka | [2024-01-22 09:33:35,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | metrics.recording.level = INFO
09:35:32 kafka | [2024-01-22 09:33:35,983] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
09:35:32 policy-pap | metrics.sample.window.ms = 30000
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
09:35:32 policy-pap | partitioner.adaptive.partitioning.enable = true
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
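[Editor's note] The LeaderAndIsr and replica-state records above show this single-broker CSIT environment becoming leader for all 50 partitions of the internal __consumer_offsets topic, each with replicas=[1] and isr=[1]. As a minimal sketch only (the broker provisions its internal topic itself; the topic name here is hypothetical), an equivalent 50-partition, replication-factor-1, compacted layout could be requested through the Kafka AdminClient:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class OffsetsTopicLayoutSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Broker address used elsewhere in this log (bootstrap.servers = [kafka:9092]).
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // 50 partitions, replication factor 1: matches partitionIndex=0..49,
                // replicas=[1], isr=[1] in the LeaderAndIsr records above.
                NewTopic demo = new NewTopic("offsets-layout-demo", 50, (short) 1)
                        .configs(Map.of("cleanup.policy", "compact")); // __consumer_offsets logs are compacted
                admin.createTopics(List.of(demo)).all().get();
            }
        }
    }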
09:35:32 policy-db-migrator | --------------
09:35:32 policy-pap | partitioner.availability.timeout.ms = 0
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
09:35:32 kafka | [2024-01-22 09:33:35,996] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | partitioner.class = null
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
09:35:32 policy-pap | partitioner.ignore.keys = false
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
09:35:32 policy-pap | receive.buffer.bytes = 32768
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-pap | reconnect.backoff.max.ms = 1000
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | reconnect.backoff.ms = 50
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0630-toscanodetype.sql
09:35:32 policy-pap | request.timeout.ms = 30000
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-pap | retries = 2147483647
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
09:35:32 policy-pap | retry.backoff.ms = 100
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | sasl.client.callback.handler.class = null
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-pap | sasl.jaas.config = null
09:35:32 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,997] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-pap | sasl.kerberos.service.name = null
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,998] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:35,998] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
09:35:32 kafka | [2024-01-22 09:33:35,998] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,998] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-pap | sasl.login.callback.handler.class = null
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
09:35:32 kafka | [2024-01-22 09:33:35,998] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
09:35:32 policy-pap | sasl.login.class = null
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:35,998] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,009] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,009] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0660-toscaparameter.sql
09:35:32 kafka | [2024-01-22 09:33:36,011] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,011] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
09:35:32 policy-pap | sasl.login.connect.timeout.ms = null
09:35:32 kafka | [2024-01-22 09:33:36,011] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
09:35:32 policy-pap | sasl.login.read.timeout.ms = null
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
09:35:32 kafka | [2024-01-22 09:33:36,011] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,011] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,011] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | sasl.login.refresh.buffer.seconds = 300
09:35:32 kafka | [2024-01-22 09:33:36,011] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
09:35:32 kafka | [2024-01-22 09:33:36,011] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0670-toscapolicies.sql
09:35:32 policy-pap | sasl.login.refresh.min.period.seconds = 60
09:35:32 kafka | [2024-01-22 09:33:36,011] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-pap | sasl.login.refresh.window.factor = 0.8
09:35:32 kafka | [2024-01-22 09:33:36,011] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
09:35:32 policy-pap | sasl.login.refresh.window.jitter = 0.05
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
09:35:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000
09:35:32 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
09:35:32 policy-pap | sasl.login.retry.backoff.ms = 100
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
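[Editor's note] The policy-db-migrator records interleaved above follow a fixed pattern: a "> upgrade NNNN-<name>.sql" banner, then an idempotent CREATE TABLE IF NOT EXISTS statement between "--------------" separators, applied in numeric order. A minimal sketch of that pattern over JDBC follows; the actual migrator (a script-driven tool in the policy/docker repo) is not shown in this log, and the URL, credentials, and runner class here are placeholders:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import java.util.List;

    public class UpgradeScriptRunnerSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; the CSIT stack wires the real ones via compose files.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_password");
                 Statement stmt = conn.createStatement()) {
                // Scripts are applied in numeric order, e.g. 0550-..., 0560-..., 0570-...
                List<Path> scripts = List.of(
                        Path.of("0550-toscacapabilitytypes.sql"),
                        Path.of("0560-toscacapabilitytypes_toscacapabilitytype.sql"));
                for (Path script : scripts) {
                    System.out.println("> upgrade " + script); // mirrors the migrator's banner
                    // CREATE TABLE IF NOT EXISTS makes each step safe to re-run.
                    stmt.execute(Files.readString(script));
                }
            }
        }
    }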
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
09:35:32 policy-pap | sasl.mechanism = GSSAPI
09:35:32 policy-db-migrator | > upgrade 0690-toscapolicy.sql
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
09:35:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | sasl.oauthbearer.expected.audience = null
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
09:35:32 policy-pap | sasl.oauthbearer.expected.issuer = null
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
09:35:32 kafka | [2024-01-22 09:33:36,012] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
09:35:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | security.protocol = PLAINTEXT
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0730-toscaproperty.sql
09:35:32 policy-pap | security.providers = null
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-pap | send.buffer.bytes = 131072
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
09:35:32 kafka | [2024-01-22 09:33:36,013] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,018] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,018] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,021] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
09:35:32 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
09:35:32 kafka | [2024-01-22 09:33:36,022] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,030] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
09:35:32 kafka | [2024-01-22 09:33:36,031] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,031] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000
09:35:32 kafka | [2024-01-22 09:33:36,034] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
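[Editor's note] The LogLoader/"Created log" records above show the broker materializing one log directory per __consumer_offsets partition under /var/lib/kafka/data. Each consumer group is pinned to exactly one of these 50 partitions by hashing its group id; the sketch below mirrors the modulo-hash scheme Kafka's group coordinator uses, and is shown only for illustration (the group id is hypothetical):

    public class GroupPartitionSketch {
        // The group coordinator picks the __consumer_offsets partition for a group
        // roughly like this: a non-negative hash of the group id modulo the partition count.
        static int partitionFor(String groupId, int offsetsTopicPartitions) {
            return (groupId.hashCode() & 0x7fffffff) % offsetsTopicPartitions;
        }

        public static void main(String[] args) {
            // 50 matches the partition count created in this run.
            System.out.println(partitionFor("example-group", 50));
        }
    }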
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | socket.connection.setup.timeout.ms = 10000
09:35:32 kafka | [2024-01-22 09:33:36,034] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
09:35:32 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
09:35:32 policy-pap | ssl.cipher.suites = null
09:35:32 kafka | [2024-01-22 09:33:36,042] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,047] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,047] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,048] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
09:35:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
09:35:32 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
09:35:32 kafka | [2024-01-22 09:33:36,048] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
09:35:32 policy-pap | ssl.endpoint.identification.algorithm = https
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,059] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
09:35:32 kafka | [2024-01-22 09:33:36,060] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:35:32 policy-pap | ssl.engine.factory.class = null
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,060] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
09:35:32 policy-pap | ssl.key.password = null
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,060] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,060] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
09:35:32 policy-pap | ssl.keymanager.algorithm = SunX509
09:35:32 policy-db-migrator | > upgrade 0770-toscarequirement.sql
09:35:32 kafka | [2024-01-22 09:33:36,069] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:35:32 policy-pap | ssl.keystore.certificate.chain = null
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,070] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:35:32 policy-pap | ssl.keystore.key = null
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
09:35:32 kafka | [2024-01-22 09:33:36,070] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
09:35:32 policy-pap | ssl.keystore.location = null
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,070] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
09:35:32 policy-pap | ssl.keystore.password = null
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,070] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,082] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:35:32 policy-pap | ssl.keystore.type = JKS
09:35:32 policy-db-migrator | > upgrade 0780-toscarequirements.sql
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,082] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
09:35:32 kafka | [2024-01-22 09:33:36,082] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
09:35:32 policy-pap | ssl.protocol = TLSv1.3
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,082] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
09:35:32 policy-pap | ssl.provider = null
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,082] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
09:35:32 policy-pap | ssl.secure.random.implementation = null
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,089] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:35:32 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
09:35:32 kafka | [2024-01-22 09:33:36,090] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,090] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
09:35:32 kafka | [2024-01-22 09:33:36,090] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,090] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,102] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:35:32 policy-pap | ssl.trustmanager.algorithm = PKIX
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,102] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:35:32 policy-pap | ssl.truststore.certificates = null
09:35:32 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
09:35:32 kafka | [2024-01-22 09:33:36,102] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,102] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
09:35:32 policy-pap | ssl.truststore.location = null
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
09:35:32 kafka | [2024-01-22 09:33:36,102] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
09:35:32 policy-pap | ssl.truststore.password = null
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,112] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,113] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:35:32 policy-pap | ssl.truststore.type = JKS
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,113] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
09:35:32 policy-pap | transaction.timeout.ms = 60000
09:35:32 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
09:35:32 kafka | [2024-01-22 09:33:36,113] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,113] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
09:35:32 kafka | [2024-01-22 09:33:36,122] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:35:32 policy-db-migrator | --------------
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,122] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:35:32 policy-db-migrator | 
09:35:32 kafka | [2024-01-22 09:33:36,122] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
09:35:32 policy-db-migrator | > upgrade 0820-toscatrigger.sql
09:35:32 kafka | [2024-01-22 09:33:36,122] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,122] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
09:35:32 kafka | [2024-01-22 09:33:36,137] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:35:32 policy-pap | transactional.id = null
09:35:32 kafka | [2024-01-22 09:33:36,138] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
09:35:32 kafka | [2024-01-22 09:33:36,139] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
09:35:32 policy-db-migrator | --------------
09:35:32 kafka | [2024-01-22 09:33:36,139] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
09:35:32 kafka | [2024-01-22 09:33:36,140] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
09:35:32 policy-db-migrator | 
09:35:32 policy-pap | 
09:35:32 kafka | [2024-01-22 09:33:36,153] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
09:35:32 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
09:35:32 policy-pap | [2024-01-22T09:33:35.166+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
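[Editor's note] The ProducerConfig values that policy-pap prints across this stretch of the log (acks = -1, retries = 2147483647, linger.ms = 0, max.in.flight.requests.per.connection = 5, bootstrap.servers = [kafka:9092], value.serializer = StringSerializer) are the stock idempotent-producer profile, which is why the client logs "Instantiated an idempotent producer." above. A minimal sketch of the equivalent programmatic configuration follows; this is not policy-pap's actual code, and the topic name and payload are hypothetical:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PapProducerConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // bootstrap.servers = [kafka:9092]
            props.put(ProducerConfig.ACKS_CONFIG, "-1");                      // wait for all in-sync replicas
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);      // retries = 2147483647
            props.put(ProducerConfig.LINGER_MS_CONFIG, 0);                    // linger.ms = 0
            props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);        // logs "Instantiated an idempotent producer."
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical topic/payload, purely to exercise the configuration.
                producer.send(new ProducerRecord<>("demo-topic", "hello from the CSIT notes"));
            }
        }
    }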
09:35:32 kafka | [2024-01-22 09:33:36,154] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,154] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 09:35:32 policy-pap | [2024-01-22T09:33:35.181+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 09:35:32 kafka | [2024-01-22 09:33:36,154] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | [2024-01-22T09:33:35.182+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 09:35:32 kafka | [2024-01-22 09:33:36,155] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,164] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,165] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 09:35:32 policy-pap | [2024-01-22T09:33:35.182+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916015181 09:35:32 kafka | [2024-01-22 09:33:36,165] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | [2024-01-22T09:33:35.182+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=7d572172-1cce-42ae-9217-8134b589dbf2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 09:35:32 kafka | [2024-01-22 09:33:36,166] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 09:35:32 policy-pap | [2024-01-22T09:33:35.182+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8574506d-46f3-447e-ba50-cae2421e4a96, alive=false, publisher=null]]: starting 09:35:32 kafka | [2024-01-22 09:33:36,166] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,182] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | [2024-01-22T09:33:35.182+00:00|INFO|ProducerConfig|main] ProducerConfig values: 09:35:32 kafka | [2024-01-22 09:33:36,183] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | acks = -1 09:35:32 kafka | [2024-01-22 09:33:36,183] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 09:35:32 kafka | [2024-01-22 09:33:36,183] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | auto.include.jmx.reporter = true 09:35:32 kafka | [2024-01-22 09:33:36,183] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 09:35:32 policy-pap | batch.size = 16384 09:35:32 kafka | [2024-01-22 09:33:36,197] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | bootstrap.servers = [kafka:9092] 09:35:32 kafka | [2024-01-22 09:33:36,198] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,198] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,198] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 09:35:32 kafka | [2024-01-22 09:33:36,198] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | buffer.memory = 33554432 09:35:32 kafka | [2024-01-22 09:33:36,206] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 09:35:32 policy-pap | client.dns.lookup = use_all_dns_ips 09:35:32 kafka | [2024-01-22 09:33:36,207] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 kafka | [2024-01-22 09:33:36,207] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,207] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,207] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,215] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 09:35:32 kafka | [2024-01-22 09:33:36,216] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 09:35:32 policy-pap | client.id = producer-2 09:35:32 kafka | [2024-01-22 09:33:36,216] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,216] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,216] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,223] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 09:35:32 policy-pap | compression.type = none 09:35:32 kafka | [2024-01-22 09:33:36,223] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | connections.max.idle.ms = 540000 09:35:32 kafka | [2024-01-22 09:33:36,223] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 09:35:32 kafka | [2024-01-22 09:33:36,224] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,224] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,230] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,231] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 09:35:32 kafka | [2024-01-22 09:33:36,231] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,231] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 09:35:32 kafka | [2024-01-22 09:33:36,231] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,237] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,237] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,238] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 09:35:32 kafka | [2024-01-22 09:33:36,238] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,238] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 09:35:32 kafka | [2024-01-22 09:33:36,244] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,245] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,245] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | delivery.timeout.ms = 120000 09:35:32 kafka | [2024-01-22 09:33:36,245] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 09:35:32 policy-pap | enable.idempotence = true 09:35:32 kafka | [2024-01-22 09:33:36,245] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | interceptor.classes = [] 09:35:32 kafka | [2024-01-22 09:33:36,252] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 09:35:32 kafka | [2024-01-22 09:33:36,252] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,252] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,252] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:35:32 kafka | [2024-01-22 09:33:36,252] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 09:35:32 policy-pap | linger.ms = 0 09:35:32 kafka | [2024-01-22 09:33:36,259] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,260] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 09:35:32 kafka | [2024-01-22 09:33:36,260] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,260] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,260] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,268] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 09:35:32 kafka | [2024-01-22 09:33:36,268] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,268] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 09:35:32 kafka | [2024-01-22 09:33:36,268] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,268] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | max.block.ms = 60000 09:35:32 kafka | [2024-01-22 09:33:36,274] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 09:35:32 policy-pap | max.in.flight.requests.per.connection = 5 09:35:32 kafka | [2024-01-22 09:33:36,274] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | max.request.size = 1048576 09:35:32 kafka | [2024-01-22 09:33:36,274] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 09:35:32 policy-pap | metadata.max.age.ms = 300000 09:35:32 kafka | [2024-01-22 09:33:36,274] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,274] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,281] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,282] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 09:35:32 kafka | [2024-01-22 09:33:36,282] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,282] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:35:32 kafka | [2024-01-22 09:33:36,282] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,292] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,293] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,293] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 09:35:32 kafka | [2024-01-22 09:33:36,293] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | metadata.max.idle.ms = 300000 09:35:32 kafka | [2024-01-22 09:33:36,294] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:35:32 kafka | [2024-01-22 09:33:36,308] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | metric.reporters = [] 09:35:32 kafka | [2024-01-22 09:33:36,309] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | metrics.num.samples = 2 09:35:32 kafka | [2024-01-22 09:33:36,309] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | metrics.recording.level = INFO 09:35:32 kafka | [2024-01-22 09:33:36,309] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 09:35:32 policy-pap | metrics.sample.window.ms = 30000 09:35:32 kafka | [2024-01-22 09:33:36,309] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | partitioner.adaptive.partitioning.enable = true 09:35:32 kafka | [2024-01-22 09:33:36,315] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:35:32 policy-pap | partitioner.availability.timeout.ms = 0 09:35:32 kafka | [2024-01-22 09:33:36,316] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,316] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,316] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,316] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 09:35:32 kafka | [2024-01-22 09:33:36,322] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,323] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:35:32 kafka | [2024-01-22 09:33:36,324] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,324] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,324] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,330] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 09:35:32 kafka | [2024-01-22 09:33:36,331] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,331] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,331] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-pap | partitioner.class = null 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 09:35:32 kafka | [2024-01-22 09:33:36,331] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-pap | partitioner.ignore.keys = false 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,340] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-pap | receive.buffer.bytes = 32768 09:35:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,341] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-pap | reconnect.backoff.max.ms = 1000 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,341] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 09:35:32 policy-pap | reconnect.backoff.ms = 50 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,341] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 09:35:32 kafka | [2024-01-22 09:33:36,342] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,348] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:35:32 kafka | [2024-01-22 09:33:36,349] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,349] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,349] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,349] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 09:35:32 kafka | [2024-01-22 09:33:36,356] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | request.timeout.ms = 30000 09:35:32 kafka | [2024-01-22 09:33:36,357] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 kafka | [2024-01-22 09:33:36,357] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:35:32 policy-pap | retries = 2147483647 09:35:32 kafka | [2024-01-22 09:33:36,357] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | retry.backoff.ms = 100 09:35:32 kafka | [2024-01-22 09:33:36,358] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | sasl.client.callback.handler.class = null 09:35:32 kafka | [2024-01-22 09:33:36,364] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | sasl.jaas.config = null 09:35:32 kafka | [2024-01-22 09:33:36,365] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 09:35:32 kafka | [2024-01-22 09:33:36,365] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:35:32 kafka | [2024-01-22 09:33:36,365] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:35:32 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 09:35:32 kafka | [2024-01-22 09:33:36,365] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, 
ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,371] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | sasl.kerberos.service.name = null 09:35:32 kafka | [2024-01-22 09:33:36,372] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 09:35:32 kafka | [2024-01-22 09:33:36,372] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 09:35:32 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 09:35:32 kafka | [2024-01-22 09:33:36,372] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,373] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 09:35:32 kafka | [2024-01-22 09:33:36,381] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | sasl.login.callback.handler.class = null 09:35:32 kafka | [2024-01-22 09:33:36,382] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | sasl.login.class = null 09:35:32 kafka | [2024-01-22 09:33:36,383] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,383] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 09:35:32 kafka | [2024-01-22 09:33:36,383] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | sasl.login.connect.timeout.ms = null 09:35:32 kafka | [2024-01-22 09:33:36,394] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 kafka | [2024-01-22 09:33:36,395] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 09:35:32 kafka | [2024-01-22 09:33:36,395] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-pap | sasl.login.read.timeout.ms = null 09:35:32 kafka | [2024-01-22 09:33:36,395] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,396] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0100-pdp.sql 09:35:32 kafka | [2024-01-22 09:33:36,405] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,406] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 09:35:32 kafka | [2024-01-22 09:33:36,407] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,407] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,407] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,415] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 09:35:32 policy-pap | sasl.login.refresh.buffer.seconds = 300 09:35:32 kafka | [2024-01-22 09:33:36,415] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,415] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,415] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-pap | sasl.login.refresh.min.period.seconds = 60 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,416] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-pap | sasl.login.refresh.window.factor = 0.8 09:35:32 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,424] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-pap | sasl.login.refresh.window.jitter = 0.05 09:35:32 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,424] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-pap | sasl.login.retry.backoff.max.ms = 10000 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,424] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 09:35:32 policy-pap | sasl.login.retry.backoff.ms = 100 09:35:32 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 09:35:32 kafka | [2024-01-22 09:33:36,424] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-pap | sasl.mechanism = GSSAPI 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,424] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 09:35:32 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 09:35:32 kafka | [2024-01-22 09:33:36,431] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-pap | sasl.oauthbearer.expected.audience = null 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,431] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,432] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,432] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-pap | sasl.oauthbearer.expected.issuer = null 09:35:32 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 09:35:32 kafka | [2024-01-22 09:33:36,432] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,439] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:35:32 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 09:35:32 kafka | [2024-01-22 09:33:36,440] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,440] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,440] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,440] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:35:32 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 09:35:32 kafka | [2024-01-22 09:33:36,448] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,449] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,449] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,449] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 09:35:32 kafka | [2024-01-22 09:33:36,449] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,456] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-pap | sasl.oauthbearer.scope.claim.name = scope 09:35:32 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 09:35:32 kafka | [2024-01-22 09:33:36,457] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-pap | sasl.oauthbearer.sub.claim.name = sub 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,457] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 09:35:32 policy-pap | sasl.oauthbearer.token.endpoint.url = null 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,457] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | security.protocol = PLAINTEXT 09:35:32 kafka | [2024-01-22 09:33:36,457] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 09:35:32 policy-pap | security.providers = null 09:35:32 kafka | [2024-01-22 09:33:36,464] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,465] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 kafka | [2024-01-22 09:33:36,465] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 09:35:32 kafka | [2024-01-22 09:33:36,465] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-pap | send.buffer.bytes = 131072 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,465] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,475] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 09:35:32 kafka | [2024-01-22 09:33:36,475] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-pap | socket.connection.setup.timeout.max.ms = 30000 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,475] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 09:35:32 policy-pap | socket.connection.setup.timeout.ms = 10000 09:35:32 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 09:35:32 kafka | [2024-01-22 09:33:36,475] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | JOIN pdpstatistics b 09:35:32 kafka | [2024-01-22 09:33:36,476] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 09:35:32 kafka | [2024-01-22 09:33:36,485] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-pap | ssl.cipher.suites = null 09:35:32 policy-db-migrator | SET a.id = b.id 09:35:32 kafka | [2024-01-22 09:33:36,486] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,486] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,490] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-pap | ssl.endpoint.identification.algorithm = https 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,490] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-pap | ssl.engine.factory.class = null 09:35:32 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 09:35:32 kafka | [2024-01-22 09:33:36,497] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,497] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 kafka | [2024-01-22 09:33:36,497] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 09:35:32 policy-pap | ssl.key.password = null 09:35:32 kafka | [2024-01-22 09:33:36,498] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 kafka | [2024-01-22 09:33:36,498] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | ssl.keymanager.algorithm = SunX509 09:35:32 kafka | [2024-01-22 09:33:36,505] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | ssl.keystore.certificate.chain = null 09:35:32 kafka | [2024-01-22 09:33:36,506] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | ssl.keystore.key = null 09:35:32 kafka | [2024-01-22 09:33:36,506] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 09:35:32 policy-pap | ssl.keystore.location = null 09:35:32 kafka | [2024-01-22 09:33:36,506] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,506] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 09:35:32 policy-pap | ssl.keystore.password = null 09:35:32 kafka | [2024-01-22 09:33:36,513] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 09:35:32 kafka | [2024-01-22 09:33:36,513] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | ssl.keystore.type = JKS 09:35:32 kafka | [2024-01-22 09:33:36,513] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | ssl.protocol = TLSv1.3 09:35:32 kafka | [2024-01-22 09:33:36,513] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,514] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(DwQp2N8YQFWy2VDhXLnyoQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 09:35:32 kafka | [2024-01-22 09:33:36,516] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,516] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 09:35:32 policy-pap | ssl.provider = null 09:35:32 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 09:35:32 kafka | [2024-01-22 09:33:36,516] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-pap | ssl.secure.random.implementation = null 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0210-sequence.sql 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 09:35:32 policy-pap | ssl.trustmanager.algorithm = PKIX 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 09:35:32 policy-pap | ssl.truststore.certificates = null 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0220-sequence.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 09:35:32 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | ssl.truststore.location = null 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 09:35:32 policy-pap | ssl.truststore.password = null 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 09:35:32 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT 
PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-pap | ssl.truststore.type = JKS 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-pap | transaction.timeout.ms = 60000 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 09:35:32 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 09:35:32 policy-pap | transactional.id = null 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0120-toscatrigger.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 09:35:32 policy-pap | 09:35:32 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 09:35:32 kafka | [2024-01-22 09:33:36,517] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the 
become-leader transition for partition __consumer_offsets-7 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.183+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.186+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.186+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.186+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705916015186 09:35:32 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.186+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8574506d-46f3-447e-ba50-cae2421e4a96, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.186+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 09:35:32 policy-db-migrator | > upgrade 0140-toscaparameter.sql 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.186+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 09:35:32 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | 09:35:32 policy-pap | [2024-01-22T09:33:35.189+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | > upgrade 0150-toscaproperty.sql 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.189+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 09:35:32 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.194+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.194+00:00|INFO|TimerManager|Thread-9] timer manager update started 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.194+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 09:35:32 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 09:35:32 policy-pap | 
[2024-01-22T09:33:35.194+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.195+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 09:35:32 policy-pap | [2024-01-22T09:33:35.195+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,518] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 09:35:32 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 09:35:32 kafka | [2024-01-22 09:33:36,520] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,522] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-pap | [2024-01-22T09:33:35.197+00:00|INFO|ServiceManager|main] Policy PAP started 09:35:32 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 09:35:32 kafka | [2024-01-22 09:33:36,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | [2024-01-22T09:33:35.198+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.109 seconds (process running for 10.709) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 09:35:32 kafka | [2024-01-22 09:33:36,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 09:35:32 kafka | [2024-01-22 09:33:36,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-pap | 
[2024-01-22T09:33:35.626+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: CJoIAc7kRTWMdkSfJOx8eQ 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | [2024-01-22T09:33:35.626+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: CJoIAc7kRTWMdkSfJOx8eQ 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-pap | [2024-01-22T09:33:35.627+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 09:35:32 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | > upgrade 0100-upgrade.sql 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | select 'upgrade to 1100 completed' as msg 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | msg 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | upgrade to 1100 completed 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 09:35:32 kafka | [2024-01-22 09:33:36,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO 
[GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | > upgrade 0120-audit_sequence.sql 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | 09:35:32 
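[editor's note] The sequence and audit_sequence tables created and seeded above (INSERT ... SELECT IFNULL(max(id),0) ...) are plain JPA-style TABLE generators: one row per generator name holding the last issued value, seeded with MAX(id) so newly generated IDs land above every existing row. A minimal sketch of how an application allocates the next ID from such a table, assuming a JDBC connection to the same database; the row locking and increment-by-one policy here are illustrative, not taken from this build:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TableSequence {
    // Allocate the next ID from a TABLE-generator row such as
    // sequence(SEQ_NAME, SEQ_COUNT) or audit_sequence seeded above.
    static long nextId(Connection con, String table, String seqName)
            throws SQLException {
        con.setAutoCommit(false);
        try {
            long current;
            // Lock the generator row so concurrent allocators serialize.
            try (PreparedStatement sel = con.prepareStatement(
                    "SELECT SEQ_COUNT FROM " + table
                            + " WHERE SEQ_NAME = ? FOR UPDATE")) {
                sel.setString(1, seqName);
                try (ResultSet rs = sel.executeQuery()) {
                    if (!rs.next()) throw new SQLException("no row: " + seqName);
                    current = rs.getLong(1);
                }
            }
            try (PreparedStatement upd = con.prepareStatement(
                    "UPDATE " + table
                            + " SET SEQ_COUNT = ? WHERE SEQ_NAME = ?")) {
                upd.setLong(1, current + 1);
                upd.setString(2, seqName);
                upd.executeUpdate();
            }
            con.commit();
            return current + 1;
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}
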
kafka | [2024-01-22 09:33:36,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | TRUNCATE TABLE sequence 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 
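[editor's note] Each "Elected as the group coordinator for partition N" line above reflects how Kafka assigns a coordinator: the group id is hashed onto one of the __consumer_offsets partitions (50 of them in this run), and whichever broker leads that partition coordinates the group. A sketch of that mapping, following the same idea as Kafka's internal partitionFor(groupId); treat it as an approximation of broker internals rather than a public API:

public class CoordinatorPartition {
    // Map a consumer group id onto a __consumer_offsets partition,
    // mirroring Kafka's abs(groupId.hashCode()) % partitionCount.
    static int partitionFor(String groupId, int offsetsPartitions) {
        return (groupId.hashCode() & 0x7fffffff) % offsetsPartitions;
    }

    public static void main(String[] args) {
        // 50 matches the __consumer_offsets partition count in this log.
        System.out.println(partitionFor("policy-pap", 50));
    }
}

Since this broker is the sole replica (ISR [1]) for all 50 partitions, it becomes the coordinator for every group here.
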
policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | -------------- 09:35:32 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | [2024-01-22T09:33:35.627+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: CJoIAc7kRTWMdkSfJOx8eQ 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-pap | [2024-01-22T09:33:35.669+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | DROP TABLE pdpstatistics 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-pap | [2024-01-22T09:33:35.669+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 09:35:32 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | [2024-01-22T09:33:35.675+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-db-migrator | DROP TABLE statistics_sequence 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | [2024-01-22T09:33:35.675+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Cluster ID: CJoIAc7kRTWMdkSfJOx8eQ 09:35:32 policy-db-migrator | -------------- 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 policy-pap | [2024-01-22T09:33:35.730+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:35:32 policy-db-migrator | 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | [2024-01-22T09:33:35.798+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 09:35:32 policy-db-migrator | policyadmin: OK: upgrade (1300) 09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 09:35:32 
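[editor's note] The UNKNOWN_TOPIC_OR_PARTITION / LEADER_NOT_AVAILABLE warnings above, and the MemberIdRequiredException rejoin that follows, are the normal first-contact sequence when a consumer subscribes before the policy-pdp-pap topic and its leader exist: metadata fetches are retried inside poll() until the topic is live, and the broker rejects the first JoinGroup so the member can rejoin with a broker-assigned member id. A minimal consumer sketch that would produce the same log sequence; the topic, group id, and kafka:9092 address appear in this log, while the deserializers and poll loop are illustrative placeholders:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "kafka:9092");
        p.put("group.id", "policy-pap");
        p.put("key.deserializer",
              "org.apache.kafka.common.serialization.StringDeserializer");
        p.put("value.deserializer",
              "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
            // Subscribing before the topic exists triggers the transient
            // metadata warnings seen above; poll() retries internally,
            // then joins the group once the coordinator is discovered.
            consumer.subscribe(List.of("policy-pdp-pap"));
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }
}
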
policy-pap | [2024-01-22T09:33:35.852+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
09:35:32 policy-db-migrator | name version
09:35:32 kafka | [2024-01-22 09:33:36,528] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 5 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | policyadmin 1300
09:35:32 policy-db-migrator | ID script operation from_version to_version tag success atTime
09:35:32 kafka | [2024-01-22 09:33:36,531] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
09:35:32 policy-pap | [2024-01-22T09:33:35.919+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
09:35:32 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:05
09:35:32 kafka | [2024-01-22 09:33:36,532] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:35.967+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
09:35:32 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:05
09:35:32 kafka | [2024-01-22 09:33:36,532] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:05
09:35:32 kafka | [2024-01-22 09:33:36,532] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:05
09:35:32 kafka | [2024-01-22 09:33:36,532] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:05
09:35:32 kafka | [2024-01-22 09:33:36,532] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:36.544+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
09:35:32 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:05
09:35:32 kafka | [2024-01-22 09:33:36,532] INFO [Broker id=1] Finished LeaderAndIsr request in 551ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:36.551+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] (Re-)joining group
09:35:32 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:05
09:35:32 kafka | [2024-01-22 09:33:36,532] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:36.585+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Request joining group due to: need to re-join with the given member-id: consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3-ef886913-38dd-4a20-825d-5029b93d64b6
09:35:32 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:05
09:35:32 kafka | [2024-01-22 09:33:36,532] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:05
09:35:32 kafka | [2024-01-22 09:33:36,532] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:05
09:35:32 kafka | [2024-01-22 09:33:36,532] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,532] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,533] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:36.585+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
09:35:32 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,533] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:36.585+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] (Re-)joining group
09:35:32 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,533] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,533] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,533] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,533] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,533] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,533] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,533] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,533] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,533] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,534] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,534] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,534] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
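The policy-pap warnings above are the normal cold-start sequence for a Kafka consumer: the freshly auto-created policy-pdp-pap topic briefly reports LEADER_NOT_AVAILABLE while the broker elects a partition leader, after which the client discovers the group coordinator and begins joining the group. A minimal Java sketch of a consumer going through the same sequence; broker address, topic, and group id are taken from the log, the class name is illustrative, and this is not the actual policy-pap source:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // broker advertised in the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");          // group id seen above
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() triggers the metadata fetches; transient LEADER_NOT_AVAILABLE
            // responses are retried inside poll() and never surface to application code
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("%s-%d@%d: %s%n", r.topic(), r.partition(), r.offset(), r.value());
            }
        }
    }
}
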
09:35:32 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,534] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,534] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:36.591+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
09:35:32 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,535] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=DwQp2N8YQFWy2VDhXLnyoQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
09:35:32 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:06
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:36.594+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
09:35:32 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:36.600+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-c8d8c2ec-52c5-47de-a087-7af3258e8ed3
09:35:32 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:36.600+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
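The MemberIdRequiredException above is not a failure. Since KIP-394, a consumer's first JoinGroup request carries an empty member.id and is deliberately rejected; the coordinator hands back a generated member id (visible in the "need to re-join with the given member-id" lines), and the client immediately rejoins with it. Application code never sees these two round-trips; it only observes the resulting assignment through a rebalance listener. A minimal sketch, assuming the consumer object from the previous example:

import java.util.Collection;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

class RebalanceAware {
    static void subscribeWithListener(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // called before a rebalance takes partitions away (nothing to do on a first join)
            }
            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // called once the join/sync handshake completes; in this run it
                // would report [policy-pdp-pap-0], matching the log's assignment
            }
        });
    }
}
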
09:35:32 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:36.600+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
09:35:32 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:39.610+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Successfully joined group with generation Generation{generationId=1, memberId='consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3-ef886913-38dd-4a20-825d-5029b93d64b6', protocol='range'}
09:35:32 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:39.613+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-c8d8c2ec-52c5-47de-a087-7af3258e8ed3', protocol='range'}
09:35:32 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,537] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:39.622+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Finished assignment for group at generation 1: {consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3-ef886913-38dd-4a20-825d-5029b93d64b6=Assignment(partitions=[policy-pdp-pap-0])}
09:35:32 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:07
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:39.622+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-c8d8c2ec-52c5-47de-a087-7af3258e8ed3=Assignment(partitions=[policy-pdp-pap-0])}
09:35:32 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:39.648+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-c8d8c2ec-52c5-47de-a087-7af3258e8ed3', protocol='range'}
09:35:32 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:39.649+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
09:35:32 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:39.652+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Successfully synced group in generation Generation{generationId=1, memberId='consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3-ef886913-38dd-4a20-825d-5029b93d64b6', protocol='range'}
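The protocol='range' and the single-entry assignment maps above mean the range assignor gave each group's only member the topic's only partition, policy-pdp-pap-0. The assignor is a client-side setting; a hedged sketch of where it is configured (the property constant and class are the real kafka-clients names, but this is illustrative, not the pap configuration):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.RangeAssignor;

class AssignorConfigSketch {
    static Properties withRangeAssignor(Properties props) {
        // RangeAssignor is the first entry in the client's default strategy list;
        // CooperativeStickyAssignor would avoid stop-the-world rebalances instead
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                RangeAssignor.class.getName());
        return props;
    }
}
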
09:35:32 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-pap | [2024-01-22T09:33:39.653+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
09:35:32 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,538] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 policy-pap | [2024-01-22T09:33:39.654+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Adding newly assigned partitions: policy-pdp-pap-0
09:35:32 kafka | [2024-01-22 09:33:36,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 policy-pap | [2024-01-22T09:33:39.654+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
09:35:32 kafka | [2024-01-22 09:33:36,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 policy-pap | [2024-01-22T09:33:39.673+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
09:35:32 kafka | [2024-01-22 09:33:36,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 policy-pap | [2024-01-22T09:33:39.673+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Found no committed offset for partition policy-pdp-pap-0
09:35:32 kafka | [2024-01-22 09:33:36,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 policy-pap | [2024-01-22T09:33:39.689+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
09:35:32 kafka | [2024-01-22 09:33:36,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 kafka | [2024-01-22 09:33:36,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 policy-pap | [2024-01-22T09:33:39.689+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3, groupId=d445f4a2-e058-4282-8e5c-a34015c30918] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
09:35:32 kafka | [2024-01-22 09:33:36,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:08
09:35:32 policy-pap | [2024-01-22T09:33:41.588+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
09:35:32 kafka | [2024-01-22 09:33:36,539] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
09:35:32 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,541] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
09:35:32 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,545] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 19 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
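"Found no committed offset" followed by a reset to FetchPosition{offset=1, ...} is the auto.offset.reset policy at work: a brand-new consumer group has nothing committed, so the client falls back to a configured starting position, evidently the log end here (offset 1, i.e. just past the one record already on policy-pdp-pap-0) rather than offset 0. A hedged sketch of the two settings that govern this behavior (real kafka-clients constants, illustrative values, not the pap configuration itself):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

class OffsetResetSketch {
    static Properties withResetPolicy(Properties props) {
        // where to start when the group has no committed offset: "latest", "earliest", or "none"
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        // commit consumed positions in the background so later restarts resume instead of resetting
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        return props;
    }
}
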
09:35:32 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,545] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 policy-pap | [2024-01-22T09:33:41.588+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
09:35:32 kafka | [2024-01-22 09:33:36,545] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 policy-pap | [2024-01-22T09:33:41.590+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms
09:35:32 kafka | [2024-01-22 09:33:36,545] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 policy-pap | [2024-01-22T09:33:56.948+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
09:35:32 kafka | [2024-01-22 09:33:36,545] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 policy-pap | []
09:35:32 kafka | [2024-01-22 09:33:36,545] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,546] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 19 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:56.949+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
09:35:32 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,546] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"32e11e13-d619-41c4-89b0-27a18f984326","timestampMs":1705916036910,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup"}
09:35:32 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,546] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,546] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:56.949+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:35:32 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,546] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"32e11e13-d619-41c4-89b0-27a18f984326","timestampMs":1705916036910,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup"}
09:35:32 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,546] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,546] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2201240933050800u 1 2024-01-22 09:33:09
09:35:32 policy-pap | [2024-01-22T09:33:56.959+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
09:35:32 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,547] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:57.037+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate starting
09:35:32 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,547] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 20 milliseconds for epoch 0, of which 20 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:57.037+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate starting listener
09:35:32 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:09
09:35:32 kafka | [2024-01-22 09:33:36,547] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:10
09:35:32 kafka | [2024-01-22 09:33:36,547] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:10
09:35:32 kafka | [2024-01-22 09:33:36,547] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:10
09:35:32 kafka | [2024-01-22 09:33:36,547] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:57.037+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate starting timer
09:35:32 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:10
09:35:32 kafka | [2024-01-22 09:33:36,547] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:57.038+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=cd4be9ef-f8d3-4717-ae63-429516ff01bb, expireMs=1705916067038]
09:35:32 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:10
09:35:32 kafka | [2024-01-22 09:33:36,547] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:10
09:35:32 kafka | [2024-01-22 09:33:36,547] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-pap | [2024-01-22T09:33:57.039+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate starting enqueue
09:35:32 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:10
09:35:32 kafka | [2024-01-22 09:33:36,547] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:10
09:35:32 kafka | [2024-01-22 09:33:36,547] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
09:35:32 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:10
09:35:32 kafka | [2024-01-22 09:33:36,580] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group d445f4a2-e058-4282-8e5c-a34015c30918 in Empty state. Created a new member id consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3-ef886913-38dd-4a20-825d-5029b93d64b6 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
09:35:32 kafka | [2024-01-22 09:33:36,597] INFO [GroupCoordinator 1]: Preparing to rebalance group d445f4a2-e058-4282-8e5c-a34015c30918 in state PreparingRebalance with old generation 0 (__consumer_offsets-21) (reason: Adding new member consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3-ef886913-38dd-4a20-825d-5029b93d64b6 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
09:35:32 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2201240933050900u 1 2024-01-22 09:33:10
09:35:32 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2201240933051000u 1 2024-01-22 09:33:10
09:35:32 kafka | [2024-01-22 09:33:36,599] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-c8d8c2ec-52c5-47de-a087-7af3258e8ed3 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
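The PdpUpdate starting/timer entries above show PAP arming a response timer before it publishes PDP_UPDATE: expireMs=1705916067038 minus the message's timestampMs is about 30 seconds, matching the "update timer waiting 29999ms" line that follows. A rough sketch of such a request timer using plain JDK scheduling; the actual TimerManager in policy-pap is more involved, so this only illustrates the pattern:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

class UpdateTimerSketch {
    private final ScheduledExecutorService timers = Executors.newSingleThreadScheduledExecutor();

    ScheduledFuture<?> armUpdateTimer(String requestId) {
        // arm a ~30 s timeout for the outstanding PDP_UPDATE identified by requestId
        return timers.schedule(
                () -> System.out.println("PDP_UPDATE " + requestId + " timed out"),
                30_000, TimeUnit.MILLISECONDS);
    }

    // on receiving the matching PDP_STATUS response, cancel before it fires:
    // armedTimer.cancel(false);
}
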
(kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2201240933051000u 1 2024-01-22 09:33:10 09:35:32 kafka | [2024-01-22 09:33:36,601] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-c8d8c2ec-52c5-47de-a087-7af3258e8ed3 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | [2024-01-22T09:33:57.040+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate started 09:35:32 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2201240933051000u 1 2024-01-22 09:33:10 09:35:32 kafka | [2024-01-22 09:33:37,257] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 864903c6-6b2d-49e1-b529-b1863a334e8b in Empty state. Created a new member id consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2-b62307a7-79c9-40ae-a084-8fa87fc4222b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | [2024-01-22T09:33:57.040+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=cd4be9ef-f8d3-4717-ae63-429516ff01bb, expireMs=1705916067038] 09:35:32 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2201240933051000u 1 2024-01-22 09:33:10 09:35:32 kafka | [2024-01-22 09:33:37,260] INFO [GroupCoordinator 1]: Preparing to rebalance group 864903c6-6b2d-49e1-b529-b1863a334e8b in state PreparingRebalance with old generation 0 (__consumer_offsets-38) (reason: Adding new member consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2-b62307a7-79c9-40ae-a084-8fa87fc4222b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | [2024-01-22T09:33:57.042+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:35:32 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2201240933051000u 1 2024-01-22 09:33:10 09:35:32 kafka | [2024-01-22 09:33:39,607] INFO [GroupCoordinator 1]: Stabilized group d445f4a2-e058-4282-8e5c-a34015c30918 generation 1 (__consumer_offsets-21) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"cd4be9ef-f8d3-4717-ae63-429516ff01bb","timestampMs":1705916037021,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2201240933051000u 1 2024-01-22 09:33:10 09:35:32 kafka | [2024-01-22 09:33:39,610] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | [2024-01-22T09:33:57.090+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2201240933051000u 1 2024-01-22 09:33:10 09:35:32 kafka | [2024-01-22 09:33:39,631] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-c8d8c2ec-52c5-47de-a087-7af3258e8ed3 for group policy-pap for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2201240933051000u 1 2024-01-22 09:33:10 09:35:32 kafka | [2024-01-22 09:33:39,631] INFO [GroupCoordinator 1]: Assignment received from leader consumer-d445f4a2-e058-4282-8e5c-a34015c30918-3-ef886913-38dd-4a20-825d-5029b93d64b6 for group d445f4a2-e058-4282-8e5c-a34015c30918 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"cd4be9ef-f8d3-4717-ae63-429516ff01bb","timestampMs":1705916037021,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2201240933051000u 1 2024-01-22 09:33:10 09:35:32 kafka | [2024-01-22 09:33:40,261] INFO [GroupCoordinator 1]: Stabilized group 864903c6-6b2d-49e1-b529-b1863a334e8b generation 1 (__consumer_offsets-38) with 1 members (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-pap | [2024-01-22T09:33:57.090+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:35:32 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2201240933051100u 1 2024-01-22 09:33:10 09:35:32 kafka | [2024-01-22 09:33:40,273] INFO [GroupCoordinator 1]: Assignment received from leader consumer-864903c6-6b2d-49e1-b529-b1863a334e8b-2-b62307a7-79c9-40ae-a084-8fa87fc4222b for group 864903c6-6b2d-49e1-b529-b1863a334e8b for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 09:35:32 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2201240933051200u 1 2024-01-22 09:33:10 09:35:32 policy-pap | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"cd4be9ef-f8d3-4717-ae63-429516ff01bb","timestampMs":1705916037021,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2201240933051200u 1 2024-01-22 09:33:11 09:35:32 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2201240933051200u 1 2024-01-22 09:33:11 09:35:32 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2201240933051200u 1 2024-01-22 09:33:11 09:35:32 policy-pap | [2024-01-22T09:33:57.090+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:35:32 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2201240933051300u 1 2024-01-22 09:33:11 09:35:32 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2201240933051300u 1 2024-01-22 09:33:11 09:35:32 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2201240933051300u 1 2024-01-22 09:33:11 09:35:32 policy-db-migrator | policyadmin: OK @ 1300 09:35:32 policy-pap | [2024-01-22T09:33:57.090+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:35:32 policy-pap | [2024-01-22T09:33:57.110+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-pap | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f116a901-9c08-4bd7-b182-b5c55921c9f0","timestampMs":1705916037095,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup"} 09:35:32 policy-pap | [2024-01-22T09:33:57.111+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 09:35:32 policy-pap | [2024-01-22T09:33:57.111+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:35:32 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f116a901-9c08-4bd7-b182-b5c55921c9f0","timestampMs":1705916037095,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup"} 09:35:32 policy-pap | [2024-01-22T09:33:57.114+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"cd4be9ef-f8d3-4717-ae63-429516ff01bb","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f3b36797-dede-40e6-a912-8f5d056ff824","timestampMs":1705916037096,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-pap | [2024-01-22T09:33:57.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate stopping 09:35:32 policy-pap | [2024-01-22T09:33:57.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate stopping enqueue 09:35:32 policy-pap | [2024-01-22T09:33:57.133+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate stopping timer 09:35:32 policy-pap | [2024-01-22T09:33:57.133+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=cd4be9ef-f8d3-4717-ae63-429516ff01bb, expireMs=1705916067038] 09:35:32 policy-pap | [2024-01-22T09:33:57.134+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate stopping listener 09:35:32 policy-pap | [2024-01-22T09:33:57.134+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate stopped 09:35:32 policy-pap | [2024-01-22T09:33:57.138+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:35:32 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"cd4be9ef-f8d3-4717-ae63-429516ff01bb","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f3b36797-dede-40e6-a912-8f5d056ff824","timestampMs":1705916037096,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-pap | [2024-01-22T09:33:57.138+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id cd4be9ef-f8d3-4717-ae63-429516ff01bb 09:35:32 policy-pap | [2024-01-22T09:33:57.139+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate successful 09:35:32 policy-pap | [2024-01-22T09:33:57.139+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] 
apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 start publishing next request 09:35:32 policy-pap | [2024-01-22T09:33:57.139+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpStateChange starting 09:35:32 policy-pap | [2024-01-22T09:33:57.139+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpStateChange starting listener 09:35:32 policy-pap | [2024-01-22T09:33:57.139+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpStateChange starting timer 09:35:32 policy-pap | [2024-01-22T09:33:57.139+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=eac79578-dac3-4e88-8e87-f5481a4b7b6f, expireMs=1705916067139] 09:35:32 policy-pap | [2024-01-22T09:33:57.139+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpStateChange starting enqueue 09:35:32 policy-pap | [2024-01-22T09:33:57.139+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=eac79578-dac3-4e88-8e87-f5481a4b7b6f, expireMs=1705916067139] 09:35:32 policy-pap | [2024-01-22T09:33:57.139+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpStateChange started 09:35:32 policy-pap | [2024-01-22T09:33:57.140+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:35:32 policy-pap | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eac79578-dac3-4e88-8e87-f5481a4b7b6f","timestampMs":1705916037022,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-pap | [2024-01-22T09:33:57.149+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:35:32 policy-pap | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eac79578-dac3-4e88-8e87-f5481a4b7b6f","timestampMs":1705916037022,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-pap | [2024-01-22T09:33:57.150+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 09:35:32 policy-pap | [2024-01-22T09:33:57.164+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:35:32 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eac79578-dac3-4e88-8e87-f5481a4b7b6f","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"6fb83c35-d45e-49d3-92dc-50ca57031d47","timestampMs":1705916037153,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-pap | [2024-01-22T09:33:57.165+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id eac79578-dac3-4e88-8e87-f5481a4b7b6f 09:35:32 policy-pap | [2024-01-22T09:33:57.173+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-pap | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"eac79578-dac3-4e88-8e87-f5481a4b7b6f","timestampMs":1705916037022,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-pap | [2024-01-22T09:33:57.173+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"eac79578-dac3-4e88-8e87-f5481a4b7b6f","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"6fb83c35-d45e-49d3-92dc-50ca57031d47","timestampMs":1705916037153,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpStateChange stopping 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpStateChange stopping enqueue 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpStateChange stopping timer 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=eac79578-dac3-4e88-8e87-f5481a4b7b6f, expireMs=1705916067139] 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpStateChange stopping listener 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpStateChange stopped 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpStateChange successful 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 start publishing next request 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate starting 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate starting listener 09:35:32 policy-pap | [2024-01-22T09:33:57.176+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate starting timer 09:35:32 policy-pap | 
[2024-01-22T09:33:57.177+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=34b780a3-ced9-4779-b152-c1da3c59f2ff, expireMs=1705916067176] 09:35:32 policy-pap | [2024-01-22T09:33:57.177+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate starting enqueue 09:35:32 policy-pap | [2024-01-22T09:33:57.177+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate started 09:35:32 policy-pap | [2024-01-22T09:33:57.177+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 09:35:32 policy-pap | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"34b780a3-ced9-4779-b152-c1da3c59f2ff","timestampMs":1705916037166,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-pap | [2024-01-22T09:33:57.185+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 09:35:32 policy-pap | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"34b780a3-ced9-4779-b152-c1da3c59f2ff","timestampMs":1705916037166,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-pap | [2024-01-22T09:33:57.186+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 09:35:32 policy-pap | [2024-01-22T09:33:57.187+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-pap | {"source":"pap-6f75c005-df26-4802-8354-240b5c126b56","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"34b780a3-ced9-4779-b152-c1da3c59f2ff","timestampMs":1705916037166,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-pap | [2024-01-22T09:33:57.187+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 09:35:32 policy-pap | [2024-01-22T09:33:57.196+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 09:35:32 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"34b780a3-ced9-4779-b152-c1da3c59f2ff","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"fbceb791-c882-4044-a106-553d3943704f","timestampMs":1705916037187,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 09:35:32 policy-pap | [2024-01-22T09:33:57.197+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate stopping 09:35:32 policy-pap | [2024-01-22T09:33:57.198+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate stopping enqueue 09:35:32 policy-pap | [2024-01-22T09:33:57.198+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate stopping timer 09:35:32 policy-pap | [2024-01-22T09:33:57.198+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=34b780a3-ced9-4779-b152-c1da3c59f2ff, expireMs=1705916067176] 
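The PDP_UPDATE and PDP_STATE_CHANGE round trips traced above are plain JSON records on the policy-pdp-pap topic, so they can be inspected independently of the CSIT suite with the stock Kafka console consumer. A minimal sketch, assuming the compose kafka container is still running and its broker listener is reachable on localhost:9092 (the container name and port are assumptions, not taken from this log; plain Apache Kafka installs name the tool kafka-console-consumer.sh):

  # Replay the PAP <-> PDP exchange shown above from the start of the topic.
  # Drop --from-beginning to follow only new records.
  docker exec kafka kafka-console-consumer \
    --bootstrap-server localhost:9092 \
    --topic policy-pdp-pap \
    --from-beginning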
09:35:32 policy-pap | [2024-01-22T09:33:57.198+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate stopping listener
09:35:32 policy-pap | [2024-01-22T09:33:57.198+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate stopped
09:35:32 policy-pap | [2024-01-22T09:33:57.199+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
09:35:32 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"34b780a3-ced9-4779-b152-c1da3c59f2ff","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"fbceb791-c882-4044-a106-553d3943704f","timestampMs":1705916037187,"name":"apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
09:35:32 policy-pap | [2024-01-22T09:33:57.199+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 34b780a3-ced9-4779-b152-c1da3c59f2ff
09:35:32 policy-pap | [2024-01-22T09:33:57.203+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 PdpUpdate successful
09:35:32 policy-pap | [2024-01-22T09:33:57.203+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-bffdbd86-b49a-4042-bd06-c6a38b0c02e5 has no more requests
09:35:32 policy-pap | [2024-01-22T09:34:02.243+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
09:35:32 policy-pap | [2024-01-22T09:34:02.251+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
09:35:32 policy-pap | [2024-01-22T09:34:02.635+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:03.198+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:03.199+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:03.695+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:03.934+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0
09:35:32 policy-pap | [2024-01-22T09:34:04.043+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
09:35:32 policy-pap | [2024-01-22T09:34:04.043+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:04.043+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:04.055+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-01-22T09:34:03Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-01-22T09:34:04Z, user=policyadmin)]
09:35:32 policy-pap | [2024-01-22T09:34:04.773+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:04.774+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
09:35:32 policy-pap | [2024-01-22T09:34:04.774+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy onap.restart.tca 1.0.0
09:35:32 policy-pap | [2024-01-22T09:34:04.774+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:04.774+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:04.784+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-22T09:34:04Z, user=policyadmin)]
09:35:32 policy-pap | [2024-01-22T09:34:05.139+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
09:35:32 policy-pap | [2024-01-22T09:34:05.139+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:05.139+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
09:35:32 policy-pap | [2024-01-22T09:34:05.139+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
09:35:32 policy-pap | [2024-01-22T09:34:05.139+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:05.139+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:05.150+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-22T09:34:05Z, user=policyadmin)]
09:35:32 policy-pap | [2024-01-22T09:34:25.725+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:25.729+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
09:35:32 policy-pap | [2024-01-22T09:34:27.039+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=cd4be9ef-f8d3-4717-ae63-429516ff01bb, expireMs=1705916067038]
09:35:32 policy-pap | [2024-01-22T09:34:27.139+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=eac79578-dac3-4e88-8e87-f5481a4b7b6f, expireMs=1705916067139]
09:35:32 ++ echo 'Tearing down containers...'
09:35:32 Tearing down containers...
09:35:32 ++ docker-compose down -v --remove-orphans
09:35:33 Stopping grafana ...
09:35:33 Stopping policy-apex-pdp ...
09:35:33 Stopping policy-pap ...
09:35:33 Stopping policy-api ...
09:35:33 Stopping kafka ...
09:35:33 Stopping mariadb ...
09:35:33 Stopping compose_zookeeper_1 ...
09:35:33 Stopping simulator ...
09:35:33 Stopping prometheus ...
09:35:33 Stopping grafana ... done
09:35:33 Stopping prometheus ... done
09:35:43 Stopping policy-apex-pdp ... done
09:35:53 Stopping simulator ... done
09:35:54 Stopping policy-pap ... done
09:35:54 Stopping mariadb ... done
09:35:54 Stopping kafka ... done
09:35:55 Stopping compose_zookeeper_1 ... done
09:36:04 Stopping policy-api ... done
09:36:04 Removing grafana ...
09:36:04 Removing policy-apex-pdp ...
09:36:04 Removing policy-pap ...
09:36:04 Removing policy-api ...
09:36:04 Removing policy-db-migrator ...
09:36:04 Removing kafka ...
09:36:04 Removing mariadb ...
09:36:04 Removing compose_zookeeper_1 ...
09:36:04 Removing simulator ...
09:36:04 Removing prometheus ...
09:36:04 Removing simulator ... done
09:36:04 Removing grafana ... done
09:36:04 Removing policy-db-migrator ... done
09:36:04 Removing policy-pap ... done
09:36:04 Removing mariadb ... done
09:36:04 Removing prometheus ... done
09:36:04 Removing policy-api ... done
09:36:04 Removing policy-apex-pdp ... done
09:36:04 Removing compose_zookeeper_1 ... done
09:36:04 Removing kafka ... done
09:36:04 Removing network compose_default
09:36:04 ++ cd /w/workspace/policy-pap-master-project-csit-verify-pap
09:36:04 + load_set
09:36:04 + _setopts=hxB
09:36:04 ++ echo braceexpand:hashall:interactive-comments:xtrace
09:36:04 ++ tr : ' '
09:36:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:36:04 + set +o braceexpand
09:36:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:36:04 + set +o hashall
09:36:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:36:04 + set +o interactive-comments
09:36:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
09:36:04 + set +o xtrace
09:36:04 ++ echo hxB
09:36:04 ++ sed 's/./& /g'
09:36:04 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:36:04 + set +h
09:36:04 + for i in $(echo "$_setopts" | sed 's/./& /g')
09:36:04 + set +x
09:36:04 + [[ -n /tmp/tmp.wqeCzjSW34 ]]
09:36:04 + rsync -av /tmp/tmp.wqeCzjSW34/ /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap
09:36:04 sending incremental file list
09:36:04 ./
09:36:04 log.html
09:36:04 output.xml
09:36:04 report.html
09:36:04 testplan.txt
09:36:04
09:36:04 sent 910,488 bytes received 95 bytes 1,821,166.00 bytes/sec
09:36:04 total size is 909,943 speedup is 1.00
09:36:04 + rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/models
09:36:04 + exit 1
09:36:04 Build step 'Execute shell' marked build as failure
09:36:04 $ ssh-agent -k
09:36:04 unset SSH_AUTH_SOCK;
09:36:04 unset SSH_AGENT_PID;
09:36:04 echo Agent pid 2081 killed;
09:36:04 [ssh-agent] Stopped.
09:36:04 Robot results publisher started...
09:36:04 -Parsing output xml:
09:36:05 Done!
09:36:05 WARNING! Could not find file: **/log.html
09:36:05 WARNING! Could not find file: **/report.html
09:36:05 -Copying log files to build dir:
09:36:05 Done!
09:36:05 -Assigning results to build:
09:36:05 Done!
09:36:05 -Checking thresholds:
09:36:05 Done!
09:36:05 Done publishing Robot results.
09:36:05 [PostBuildScript] - [INFO] Executing post build scripts.
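The + load_set trace above restores the caller's shell options before the script exits: it walks SHELLOPTS to clear each long-form option, then replays the saved short flags one letter at a time (after set +x, tracing stops, which is why the final iteration is not echoed). A sketch of the helper as reconstructed from the xtrace output; the real function lives in the job's shared shell library, which this log does not show:

  load_set() {
      # _setopts was captured earlier in the job; hxB per the trace above
      _setopts=hxB
      # clear each long-form option currently recorded in SHELLOPTS
      for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
          set +o "$i"
      done
      # then clear each saved single-letter flag (h, x, B)
      for i in $(echo "$_setopts" | sed 's/./& /g'); do
          set "+$i"
      done
  }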
09:36:05 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins17396130572049975577.sh 09:36:05 ---> sysstat.sh 09:36:05 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins2573541465147745287.sh 09:36:05 ---> package-listing.sh 09:36:05 ++ facter osfamily 09:36:05 ++ tr '[:upper:]' '[:lower:]' 09:36:06 + OS_FAMILY=debian 09:36:06 + workspace=/w/workspace/policy-pap-master-project-csit-verify-pap 09:36:06 + START_PACKAGES=/tmp/packages_start.txt 09:36:06 + END_PACKAGES=/tmp/packages_end.txt 09:36:06 + DIFF_PACKAGES=/tmp/packages_diff.txt 09:36:06 + PACKAGES=/tmp/packages_start.txt 09:36:06 + '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']' 09:36:06 + PACKAGES=/tmp/packages_end.txt 09:36:06 + case "${OS_FAMILY}" in 09:36:06 + dpkg -l 09:36:06 + grep '^ii' 09:36:06 + '[' -f /tmp/packages_start.txt ']' 09:36:06 + '[' -f /tmp/packages_end.txt ']' 09:36:06 + diff /tmp/packages_start.txt /tmp/packages_end.txt 09:36:06 + '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']' 09:36:06 + mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/archives/ 09:36:06 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-verify-pap/archives/ 09:36:06 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins17058368828960616761.sh 09:36:06 ---> capture-instance-metadata.sh 09:36:06 Setup pyenv: 09:36:06 system 09:36:06 3.8.13 09:36:06 3.9.13 09:36:06 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version) 09:36:06 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-SZ0L from file:/tmp/.os_lf_venv 09:36:07 lf-activate-venv(): INFO: Installing: lftools 09:36:17 lf-activate-venv(): INFO: Adding /tmp/venv-SZ0L/bin to PATH 09:36:17 INFO: Running in OpenStack, capturing instance metadata 09:36:18 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins7299384295114300789.sh 09:36:18 provisioning config files... 09:36:18 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/config3852179193338363485tmp 09:36:18 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 09:36:18 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 09:36:18 [EnvInject] - Injecting environment variables from a build step. 09:36:18 [EnvInject] - Injecting as environment variables the properties content 09:36:18 SERVER_ID=logs 09:36:18 09:36:18 [EnvInject] - Variables injected successfully. 09:36:18 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins10918903503492124007.sh 09:36:18 ---> create-netrc.sh 09:36:18 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins3825597360536635700.sh 09:36:18 ---> python-tools-install.sh 09:36:18 Setup pyenv: 09:36:18 system 09:36:18 3.8.13 09:36:18 3.9.13 09:36:18 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version) 09:36:18 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-SZ0L from file:/tmp/.os_lf_venv 09:36:19 lf-activate-venv(): INFO: Installing: lftools 09:36:26 lf-activate-venv(): INFO: Adding /tmp/venv-SZ0L/bin to PATH 09:36:26 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins12421063949511988779.sh 09:36:26 ---> sudo-logs.sh 09:36:26 Archiving 'sudo' log.. 
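The package-listing.sh trace above captures the package-diff logic in full: snapshot the installed Debian packages at the end of the build, diff against the start-of-build snapshot, and archive all three lists. A condensed sketch of exactly what the trace shows (WORKSPACE stands in for the job workspace path):

  # Snapshot installed packages; compare against the start-of-build list.
  START=/tmp/packages_start.txt
  END=/tmp/packages_end.txt
  DIFF=/tmp/packages_diff.txt
  dpkg -l | grep '^ii' > "$END"
  if [ -f "$START" ] && [ -f "$END" ]; then
      diff "$START" "$END" > "$DIFF" || true   # diff exits 1 when the lists differ
  fi
  mkdir -p "$WORKSPACE/archives/"
  cp -f "$DIFF" "$END" "$START" "$WORKSPACE/archives/"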
09:36:27 [policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins1170274232308496218.sh
09:36:27 ---> job-cost.sh
09:36:27 Setup pyenv:
09:36:27 system
09:36:27 3.8.13
09:36:27 3.9.13
09:36:27 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
09:36:27 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-SZ0L from file:/tmp/.os_lf_venv
09:36:28 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
09:36:35 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
09:36:35 lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
09:36:35 lf-activate-venv(): INFO: Adding /tmp/venv-SZ0L/bin to PATH
09:36:35 INFO: No Stack...
09:36:35 INFO: Retrieving Pricing Info for: v3-standard-8
09:36:36 INFO: Archiving Costs
09:36:36 [policy-pap-master-project-csit-verify-pap] $ /bin/bash -l /tmp/jenkins12396685013761466407.sh
09:36:36 ---> logs-deploy.sh
09:36:36 Setup pyenv:
09:36:36 system
09:36:36 3.8.13
09:36:36 3.9.13
09:36:36 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
09:36:36 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-SZ0L from file:/tmp/.os_lf_venv
09:36:37 lf-activate-venv(): INFO: Installing: lftools
09:36:46 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
09:36:46 python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
09:36:46 lf-activate-venv(): INFO: Adding /tmp/venv-SZ0L/bin to PATH
09:36:46 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-verify-pap/502
09:36:46 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
09:36:47 Archives upload complete.
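Both ERROR lines come from pip's resolver flagging mutually incompatible openstacksdk pins: per the log, lftools 0.37.8 requires openstacksdk<1.5.0 while python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, and the venv is reused across steps, so whichever install runs last leaves the other tool's requirement broken. The same conflicts can be listed after the fact with pip's built-in checker; a sketch against the venv this build reuses (any version pins below are illustrative, not taken from the log):

  # Report broken requirements in the build venv (path from the log).
  /tmp/venv-SZ0L/bin/pip check
  # The two pins are disjoint (<1.5.0 vs >=2.0.0), so a clean resolve needs
  # tool versions whose openstacksdk requirements overlap, e.g. an older
  # python-openstackclient; the pin here is an assumption for illustration.
  /tmp/venv-SZ0L/bin/pip install 'lftools==0.37.8' 'python-openstackclient<6.4.0'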
09:36:47 INFO: archiving logs to Nexus
09:36:48 ---> uname -a:
09:36:48 Linux prd-ubuntu1804-docker-8c-8g-14120 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
09:36:48
09:36:48
09:36:48 ---> lscpu:
09:36:48 Architecture: x86_64
09:36:48 CPU op-mode(s): 32-bit, 64-bit
09:36:48 Byte Order: Little Endian
09:36:48 CPU(s): 8
09:36:48 On-line CPU(s) list: 0-7
09:36:48 Thread(s) per core: 1
09:36:48 Core(s) per socket: 1
09:36:48 Socket(s): 8
09:36:48 NUMA node(s): 1
09:36:48 Vendor ID: AuthenticAMD
09:36:48 CPU family: 23
09:36:48 Model: 49
09:36:48 Model name: AMD EPYC-Rome Processor
09:36:48 Stepping: 0
09:36:48 CPU MHz: 2799.998
09:36:48 BogoMIPS: 5599.99
09:36:48 Virtualization: AMD-V
09:36:48 Hypervisor vendor: KVM
09:36:48 Virtualization type: full
09:36:48 L1d cache: 32K
09:36:48 L1i cache: 32K
09:36:48 L2 cache: 512K
09:36:48 L3 cache: 16384K
09:36:48 NUMA node0 CPU(s): 0-7
09:36:48 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
09:36:48
09:36:48
09:36:48 ---> nproc:
09:36:48 8
09:36:48
09:36:48
09:36:48 ---> df -h:
09:36:48 Filesystem Size Used Avail Use% Mounted on
09:36:48 udev 16G 0 16G 0% /dev
09:36:48 tmpfs 3.2G 708K 3.2G 1% /run
09:36:48 /dev/vda1 155G 15G 141G 10% /
09:36:48 tmpfs 16G 0 16G 0% /dev/shm
09:36:48 tmpfs 5.0M 0 5.0M 0% /run/lock
09:36:48 tmpfs 16G 0 16G 0% /sys/fs/cgroup
09:36:48 /dev/vda15 105M 4.4M 100M 5% /boot/efi
09:36:48 tmpfs 3.2G 0 3.2G 0% /run/user/1001
09:36:48
09:36:48
09:36:48 ---> free -m:
09:36:48 total used free shared buff/cache available
09:36:48 Mem: 32167 838 24646 0 6681 30872
09:36:48 Swap: 1023 0 1023
09:36:48
09:36:48
09:36:48 ---> ip addr:
09:36:48 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
09:36:48 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
09:36:48 inet 127.0.0.1/8 scope host lo
09:36:48 valid_lft forever preferred_lft forever
09:36:48 inet6 ::1/128 scope host
09:36:48 valid_lft forever preferred_lft forever
09:36:48 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
09:36:48 link/ether fa:16:3e:ed:a0:a5 brd ff:ff:ff:ff:ff:ff
09:36:48 inet 10.30.107.62/23 brd 10.30.107.255 scope global dynamic ens3
09:36:48 valid_lft 85952sec preferred_lft 85952sec
09:36:48 inet6 fe80::f816:3eff:feed:a0a5/64 scope link
09:36:48 valid_lft forever preferred_lft forever
09:36:48 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
09:36:48 link/ether 02:42:71:7a:86:38 brd ff:ff:ff:ff:ff:ff
09:36:48 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
09:36:48 valid_lft forever preferred_lft forever
09:36:48
09:36:48
09:36:48 ---> sar -b -r -n DEV:
09:36:48 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14120) 01/22/24 _x86_64_ (8 CPU)
09:36:48
09:36:48 09:29:23 LINUX RESTART (8 CPU)
09:36:48
09:36:48 09:30:01 tps rtps wtps bread/s bwrtn/s
09:36:48 09:31:01 114.52 17.70 96.82 1020.92 62250.90
09:36:48 09:32:01 155.62 23.05 132.58 2757.81 69716.78
09:36:48 09:33:01 350.13 5.53 344.60 512.43 149601.85
09:36:48 09:34:01 222.13 6.70 215.43 301.43 9202.35
09:36:48 09:35:01 4.07 0.00 4.07 0.00 91.80
09:36:48 09:36:01 46.36 0.07 46.29 9.07 1829.18
09:36:48 Average: 148.80 8.84 139.96 766.96 48779.37
09:36:48
09:36:48 09:30:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
09:36:48 09:31:01 30065660 31677380 2873560 8.72 69600 1851772 1492724 4.39 891460 1687260 174476
09:36:48 09:32:01 29332360 31699124 3606860 10.95 91852 2560252 1563368 4.60 957940 2301880 525696
09:36:48 09:33:01 24846280 31234904 8092940 24.57 145776 6344664 6735980 19.82 1498532 6001520 564
09:36:48 09:34:01 22992668 29537460 9946552 30.20 158424 6477516 8840076 26.01 3323188 5997232 320
09:36:48 09:35:01 22912608 29457848 10026612 30.44 158632 6477724 8878668 26.12 3405872 5994936 240
09:36:48 09:36:01 24478580 31041924 8460640 25.69 159568 6504864 2426848 7.14 1878092 6006564 244
09:36:48 Average: 25771359 30774773 7167861 21.76 130642 5036132 4989611 14.68 1992514 4664899 116923
09:36:48
09:36:48 09:30:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
09:36:48 09:31:01 ens3 59.79 42.42 998.61 8.21 0.00 0.00 0.00 0.00
09:36:48 09:31:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:36:48 09:31:01 lo 1.33 1.33 0.14 0.14 0.00 0.00 0.00 0.00
09:36:48 09:32:01 ens3 186.52 131.54 4234.64 14.29 0.00 0.00 0.00 0.00
09:36:48 09:32:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:36:48 09:32:01 lo 6.40 6.40 0.60 0.60 0.00 0.00 0.00 0.00
09:36:48 09:32:01 br-93b2f0259965 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:36:48 09:33:01 ens3 1062.40 611.20 31173.74 44.06 0.00 0.00 0.00 0.00
09:36:48 09:33:01 veth29ff900 0.00 0.15 0.00 0.01 0.00 0.00 0.00 0.00
09:36:48 09:33:01 vethae6fc56 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:36:48 09:33:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:36:48 09:34:01 ens3 5.50 4.12 1.52 1.38 0.00 0.00 0.00 0.00
09:36:48 09:34:01 veth29ff900 14.48 13.36 1.92 1.92 0.00 0.00 0.00 0.00
09:36:48 09:34:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:36:48 09:34:01 vethf9e0bcc 0.00 0.35 0.00 0.02 0.00 0.00 0.00 0.00
09:36:48 09:35:01 ens3 3.13 3.32 0.65 0.80 0.00 0.00 0.00 0.00
09:36:48 09:35:01 veth29ff900 13.83 9.32 1.05 1.34 0.00 0.00 0.00 0.00
09:36:48 09:35:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:36:48 09:35:01 vethf9e0bcc 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00
09:36:48 09:36:01 ens3 16.31 15.75 6.60 16.54 0.00 0.00 0.00 0.00
09:36:48 09:36:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:36:48 09:36:01 vethdef7ed2 54.12 48.23 20.48 40.49 0.00 0.00 0.00 0.00
09:36:48 09:36:01 lo 34.94 34.94 6.20 6.20 0.00 0.00 0.00 0.00
09:36:48 Average: ens3 222.23 134.70 6068.01 14.21 0.00 0.00 0.00 0.00
09:36:48 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:36:48 Average: vethdef7ed2 9.02 8.04 3.41 6.75 0.00 0.00 0.00 0.00
09:36:48 Average: lo 5.16 5.16 0.98 0.98 0.00 0.00 0.00 0.00
09:36:48
09:36:48
09:36:48 ---> sar -P ALL:
09:36:48 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-14120) 01/22/24 _x86_64_ (8 CPU)
09:36:48
09:36:48 09:29:23 LINUX RESTART (8 CPU)
09:36:48
09:36:48 09:30:01 CPU %user %nice %system %iowait %steal %idle
09:36:48 09:31:01 all 8.46 0.00 0.59 5.91 0.03 85.01
09:36:48 09:31:01 0 0.92 0.00 0.17 0.35 0.02 98.55
09:36:48 09:31:01 1 14.58 0.00 0.96 3.44 0.03 81.00
09:36:48 09:31:01 2 18.55 0.00 1.07 2.64 0.03 77.71
09:36:48 09:31:01 3 10.35 0.00 0.54 33.73 0.05 55.33
09:36:48 09:31:01 4 5.95 0.00 0.45 0.12 0.02 93.46
09:36:48 09:31:01 5 2.58 0.00 0.27 4.17 0.02 92.96
09:36:48 09:31:01 6 13.91 0.00 1.02 2.94 0.07 82.06
09:36:48 09:31:01 7 1.00 0.00 0.17 0.00 0.00 98.83
09:36:48 09:32:01 all 9.90 0.00 1.32 4.40 0.03 84.35
09:36:48 09:32:01 0 7.38 0.00 1.25 0.72 0.02 90.64
09:36:48 09:32:01 1 28.62 0.00 2.06 3.28 0.05 65.98
09:36:48 09:32:01 2 6.94 0.00 1.09 0.08 0.02 91.87
09:36:48 09:32:01 3 10.78 0.00 1.17 0.37 0.03 87.65
09:36:48 09:32:01 4 1.33 0.00 1.00 1.97 0.02 95.68
09:36:48 09:32:01 5 1.41 0.00 0.87 24.75 0.02 72.95
09:36:48 09:32:01 6 5.75 0.00 1.47 2.01 0.07 90.71
09:36:48 09:32:01 7 17.03 0.00 1.67 2.04 0.03 79.22
09:36:48 09:33:01 all 15.61 0.00 6.08 4.21 0.08 74.02
09:36:48 09:33:01 0 16.98 0.00 5.76 1.07 0.08 76.10
09:36:48 09:33:01 1 15.08 0.00 6.80 1.32 0.07 76.73
09:36:48 09:33:01 2 14.78 0.00 6.42 1.32 0.08 77.39
09:36:48 09:33:01 3 20.45 0.00 5.71 0.29 0.07 73.48
09:36:48 09:33:01 4 13.64 0.00 5.36 8.45 0.09 72.47
09:36:48 09:33:01 5 12.19 0.00 6.57 20.90 0.10 60.23
09:36:48 09:33:01 6 17.58 0.00 4.70 0.29 0.08 77.34
09:36:48 09:33:01 7 14.09 0.00 7.33 0.14 0.08 78.36
09:36:48 09:34:01 all 25.93 0.00 2.91 0.83 0.10 70.24
09:36:48 09:34:01 0 26.31 0.00 2.93 0.42 0.10 70.24
09:36:48 09:34:01 1 31.13 0.00 3.48 0.72 0.12 64.55
09:36:48 09:34:01 2 29.53 0.00 3.11 0.05 0.12 67.19
09:36:48 09:34:01 3 23.13 0.00 2.53 0.39 0.10 73.85
09:36:48 09:34:01 4 28.23 0.00 3.22 0.27 0.08 68.20
09:36:48 09:34:01 5 18.40 0.00 2.41 0.91 0.10 78.18
09:36:48 09:34:01 6 26.01 0.00 2.81 0.69 0.10 70.39
09:36:48 09:34:01 7 24.64 0.00 2.76 3.18 0.12 69.30
09:36:48 09:35:01 all 3.53 0.00 0.36 0.12 0.04 95.97
09:36:48 09:35:01 0 2.38 0.00 0.20 0.00 0.02 97.40
09:36:48 09:35:01 1 3.22 0.00 0.40 0.02 0.07 96.29
09:36:48 09:35:01 2 3.05 0.00 0.20 0.00 0.00 96.75
09:36:48 09:35:01 3 3.36 0.00 0.35 0.00 0.03 96.26
09:36:48 09:35:01 4 3.84 0.00 0.35 0.00 0.03 95.77
09:36:48 09:35:01 5 5.19 0.00 0.62 0.88 0.05 93.25
09:36:48 09:35:01 6 3.19 0.00 0.30 0.02 0.02 96.48
09:36:48 09:35:01 7 4.00 0.00 0.42 0.02 0.03 95.54
09:36:48 09:36:01 all 1.32 0.00 0.48 0.09 0.04 98.06
09:36:48 09:36:01 0 0.73 0.00 0.47 0.25 0.02 98.53
09:36:48 09:36:01 1 0.80 0.00 0.40 0.13 0.03 98.63
09:36:48 09:36:01 2 1.82 0.00 0.45 0.07 0.07 97.59
09:36:48 09:36:01 3 0.89 0.00 0.52 0.07 0.03 98.49
09:36:48 09:36:01 4 0.94 0.00 0.42 0.03 0.05 98.56
09:36:48 09:36:01 5 2.47 0.00 0.60 0.13 0.05 96.74
09:36:48 09:36:01 6 0.82 0.00 0.40 0.00 0.03 98.75
09:36:48 09:36:01 7 2.12 0.00 0.60 0.07 0.05 97.16
09:36:48 Average: all 10.77 0.00 1.94 2.59 0.05 84.64
09:36:48 Average: 0 9.08 0.00 1.78 0.47 0.04 88.63
09:36:48 Average: 1 15.56 0.00 2.33 1.48 0.06 80.56
09:36:48 Average: 2 12.43 0.00 2.05 0.69 0.05 84.78
09:36:48 Average: 3 11.48 0.00 1.80 5.81 0.05 80.86
09:36:48 Average: 4 8.96 0.00 1.79 1.79 0.05 87.42
09:36:48 Average: 5 7.02 0.00 1.87 8.57 0.06 82.48
09:36:48 Average: 6 11.19 0.00 1.78 0.99 0.06 85.98
09:36:48 Average: 7 10.46 0.00 2.14 0.91 0.05 86.44
09:36:48
09:36:48
09:36:48
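The tables above are sysstat's sar reading back the collector's data file for the day, as the section headers indicate. A sketch of the equivalent queries on the build node (assuming the sadc collector was running, which the LINUX RESTART markers suggest):

  # -b: I/O and transfer rates; -r: memory usage; -n DEV: per-interface network
  sar -b -r -n DEV
  # -P ALL: per-processor utilization, one block per sampling interval
  sar -P ALL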