23:10:58 Started by timer
23:10:58 Running as SYSTEM
23:10:58 [EnvInject] - Loading node environment variables.
23:10:59 Building remotely on prd-ubuntu1804-docker-8c-8g-8694 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:10:59 [ssh-agent] Looking for ssh-agent implementation...
23:10:59 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:10:59 $ ssh-agent
23:10:59 SSH_AUTH_SOCK=/tmp/ssh-vDNGYFMnrIpo/agent.2140
23:10:59 SSH_AGENT_PID=2142
23:10:59 [ssh-agent] Started.
23:10:59 Running ssh-add (command line suppressed)
23:10:59 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_11347858321556987045.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_11347858321556987045.key)
23:10:59 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:10:59 The recommended git tool is: NONE
23:11:00 using credential onap-jenkins-ssh
23:11:00 Wiping out workspace first.
23:11:00 Cloning the remote Git repository
23:11:00 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:11:00  > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:11:01 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:11:01  > git --version # timeout=10
23:11:01  > git --version # 'git version 2.17.1'
23:11:01 using GIT_SSH to set credentials Gerrit user
23:11:01 Verifying host key using manually-configured host key entries
23:11:01  > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:11:01  > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:11:01  > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:11:01 Avoid second fetch
23:11:01  > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:11:01 Checking out Revision 8361cb0e3663a610a46bc5ea8a0cc783ade26f89 (refs/remotes/origin/master)
23:11:01  > git config core.sparsecheckout # timeout=10
23:11:01  > git checkout -f 8361cb0e3663a610a46bc5ea8a0cc783ade26f89 # timeout=30
23:11:02 Commit message: "Fix config files removing hibernate deprecated properties and changing robot deprecated commands in test files"
23:11:02  > git rev-list --no-walk 8361cb0e3663a610a46bc5ea8a0cc783ade26f89 # timeout=10
23:11:02 provisioning config files...
23:11:02 copy managed file [npmrc] to file:/home/jenkins/.npmrc
23:11:02 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
23:11:02 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10884642917397392476.sh
23:11:02 ---> python-tools-install.sh
23:11:02 Setup pyenv:
23:11:02 * system (set by /opt/pyenv/version)
23:11:02 * 3.8.13 (set by /opt/pyenv/version)
23:11:02 * 3.9.13 (set by /opt/pyenv/version)
23:11:02 * 3.10.6 (set by /opt/pyenv/version)
23:11:06 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-NbUn
23:11:06 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
23:11:09 lf-activate-venv(): INFO: Installing: lftools
23:11:43 lf-activate-venv(): INFO: Adding /tmp/venv-NbUn/bin to PATH
23:11:43 Generating Requirements File
23:12:12 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
23:12:12 lftools 0.37.9 requires openstacksdk>=2.1.0, but you have openstacksdk 0.62.0 which is incompatible.
23:12:13 Python 3.10.6
23:12:13 pip 24.0 from /tmp/venv-NbUn/lib/python3.10/site-packages/pip (python 3.10)
23:12:14 appdirs==1.4.4
23:12:14 argcomplete==3.2.2
23:12:14 aspy.yaml==1.3.0
23:12:14 attrs==23.2.0
23:12:14 autopage==0.5.2
23:12:14 beautifulsoup4==4.12.3
23:12:14 boto3==1.34.49
23:12:14 botocore==1.34.49
23:12:14 bs4==0.0.2
23:12:14 cachetools==5.3.2
23:12:14 certifi==2024.2.2
23:12:14 cffi==1.16.0
23:12:14 cfgv==3.4.0
23:12:14 chardet==5.2.0
23:12:14 charset-normalizer==3.3.2
23:12:14 click==8.1.7
23:12:14 cliff==4.6.0
23:12:14 cmd2==2.4.3
23:12:14 cryptography==3.3.2
23:12:14 debtcollector==3.0.0
23:12:14 decorator==5.1.1
23:12:14 defusedxml==0.7.1
23:12:14 Deprecated==1.2.14
23:12:14 distlib==0.3.8
23:12:14 dnspython==2.6.1
23:12:14 docker==4.2.2
23:12:14 dogpile.cache==1.3.2
23:12:14 email-validator==2.1.0.post1
23:12:14 filelock==3.13.1
23:12:14 future==1.0.0
23:12:14 gitdb==4.0.11
23:12:14 GitPython==3.1.42
23:12:14 google-auth==2.28.1
23:12:14 httplib2==0.22.0
23:12:14 identify==2.5.35
23:12:14 idna==3.6
23:12:14 importlib-resources==1.5.0
23:12:14 iso8601==2.1.0
23:12:14 Jinja2==3.1.3
23:12:14 jmespath==1.0.1
23:12:14 jsonpatch==1.33
23:12:14 jsonpointer==2.4
23:12:14 jsonschema==4.21.1
23:12:14 jsonschema-specifications==2023.12.1
23:12:14 keystoneauth1==5.6.0
23:12:14 kubernetes==29.0.0
23:12:14 lftools==0.37.9
23:12:14 lxml==5.1.0
23:12:14 MarkupSafe==2.1.5
23:12:14 msgpack==1.0.7
23:12:14 multi_key_dict==2.0.3
23:12:14 munch==4.0.0
23:12:14 netaddr==1.2.1
23:12:14 netifaces==0.11.0
23:12:14 niet==1.4.2
23:12:14 nodeenv==1.8.0
23:12:14 oauth2client==4.1.3
23:12:14 oauthlib==3.2.2
23:12:14 openstacksdk==0.62.0
23:12:14 os-client-config==2.1.0
23:12:14 os-service-types==1.7.0
23:12:14 osc-lib==3.0.1
23:12:14 oslo.config==9.4.0
23:12:14 oslo.context==5.4.0
23:12:14 oslo.i18n==6.3.0
23:12:14 oslo.log==5.5.0
23:12:14 oslo.serialization==5.4.0
23:12:14 oslo.utils==7.1.0
23:12:14 packaging==23.2
23:12:14 pbr==6.0.0
23:12:14 platformdirs==4.2.0
23:12:14 prettytable==3.10.0
23:12:14 pyasn1==0.5.1
23:12:14 pyasn1-modules==0.3.0
23:12:14 pycparser==2.21
23:12:14 pygerrit2==2.0.15
23:12:14 PyGithub==2.2.0
23:12:14 pyinotify==0.9.6
23:12:14 PyJWT==2.8.0
23:12:14 PyNaCl==1.5.0
23:12:14 pyparsing==2.4.7
23:12:14 pyperclip==1.8.2
23:12:14 pyrsistent==0.20.0
23:12:14 python-cinderclient==9.4.0
23:12:14 python-dateutil==2.8.2
23:12:14 python-heatclient==3.4.0
23:12:14 python-jenkins==1.8.2
23:12:14 python-keystoneclient==5.3.0
23:12:14 python-magnumclient==4.3.0
23:12:14 python-novaclient==18.4.0
23:12:14 python-openstackclient==6.0.1
23:12:14 python-swiftclient==4.4.0
23:12:14 PyYAML==6.0.1
23:12:14 referencing==0.33.0
23:12:14 requests==2.31.0
23:12:14 requests-oauthlib==1.3.1
23:12:14 requestsexceptions==1.4.0
23:12:14 rfc3986==2.0.0
23:12:14 rpds-py==0.18.0
23:12:14 rsa==4.9
23:12:14 ruamel.yaml==0.18.6
23:12:14 ruamel.yaml.clib==0.2.8
23:12:14 s3transfer==0.10.0
23:12:14 simplejson==3.19.2
23:12:14 six==1.16.0
23:12:14 smmap==5.0.1
23:12:14 soupsieve==2.5
23:12:14 stevedore==5.2.0
23:12:14 tabulate==0.9.0
23:12:14 toml==0.10.2
23:12:14 tomlkit==0.12.3
23:12:14 tqdm==4.66.2
23:12:14 typing_extensions==4.10.0
23:12:14 tzdata==2024.1
23:12:14 urllib3==1.26.18
23:12:14 virtualenv==20.25.1
23:12:14 wcwidth==0.2.13
23:12:14 websocket-client==1.7.0
23:12:14 wrapt==1.16.0
23:12:14 xdg==6.0.0
23:12:14 xmltodict==0.13.0
23:12:14 yq==3.2.3
23:12:14 [EnvInject] - Injecting environment variables from a build step.
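
The lftools/openstacksdk conflict reported above is only a pip resolver warning here; the freeze confirms openstacksdk 0.62.0 stayed installed and the job carried on. If it did need fixing, one conventional approach is a constraints file that pins the dependency before lftools is installed. The sketch below is illustrative only; the venv path and file names are hypothetical, not part of this job's scripts:

    # Hypothetical fix sketch: satisfy "lftools 0.37.9 requires
    # openstacksdk>=2.1.0" by constraining the resolver up front.
    python3 -m venv /tmp/venv-example && . /tmp/venv-example/bin/activate
    printf 'openstacksdk>=2.1.0\n' > /tmp/constraints.txt
    python3 -m pip install -c /tmp/constraints.txt lftools==0.37.9
    python3 -m pip check   # reports any remaining broken requirements
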
23:12:14 [EnvInject] - Injecting as environment variables the properties content
23:12:14 SET_JDK_VERSION=openjdk17
23:12:14 GIT_URL="git://cloud.onap.org/mirror"
23:12:14 
23:12:14 [EnvInject] - Variables injected successfully.
23:12:14 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins5787392273589121354.sh
23:12:14 ---> update-java-alternatives.sh
23:12:14 ---> Updating Java version
23:12:14 ---> Ubuntu/Debian system detected
23:12:14 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
23:12:14 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
23:12:14 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
23:12:14 openjdk version "17.0.4" 2022-07-19
23:12:14 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
23:12:14 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
23:12:14 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
23:12:14 [EnvInject] - Injecting environment variables from a build step.
23:12:14 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
23:12:14 [EnvInject] - Variables injected successfully.
23:12:14 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins2761987451037863148.sh
23:12:14 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
23:12:14 + set +u
23:12:14 + save_set
23:12:14 + RUN_CSIT_SAVE_SET=ehxB
23:12:14 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
23:12:14 + '[' 1 -eq 0 ']'
23:12:14 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:14 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:14 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:14 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:14 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
23:12:14 + export ROBOT_VARIABLES=
23:12:14 + ROBOT_VARIABLES=
23:12:14 + export PROJECT=pap
23:12:14 + PROJECT=pap
23:12:14 + cd /w/workspace/policy-pap-master-project-csit-pap
23:12:14 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:14 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
23:12:14 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:14 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
23:12:14 + relax_set
23:12:14 + set +e
23:12:14 + set +o pipefail
23:12:14 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
23:12:14 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:14 +++ mktemp -d
23:12:14 ++ ROBOT_VENV=/tmp/tmp.Wga7XYV1uF
23:12:14 ++ echo ROBOT_VENV=/tmp/tmp.Wga7XYV1uF
23:12:14 +++ python3 --version
23:12:14 ++ echo 'Python version is: Python 3.6.9'
23:12:14 Python version is: Python 3.6.9
23:12:14 ++ python3 -m venv --clear /tmp/tmp.Wga7XYV1uF
23:12:16 ++ source /tmp/tmp.Wga7XYV1uF/bin/activate
23:12:16 +++ deactivate nondestructive
23:12:16 +++ '[' -n '' ']'
23:12:16 +++ '[' -n '' ']'
23:12:16 +++ '[' -n /bin/bash -o -n '' ']'
23:12:16 +++ hash -r
23:12:16 +++ '[' -n '' ']'
23:12:16 +++ unset VIRTUAL_ENV
23:12:16 +++ '[' '!' nondestructive = nondestructive ']'
23:12:16 +++ VIRTUAL_ENV=/tmp/tmp.Wga7XYV1uF
23:12:16 +++ export VIRTUAL_ENV
23:12:16 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:16 +++ PATH=/tmp/tmp.Wga7XYV1uF/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:16 +++ export PATH
23:12:16 +++ '[' -n '' ']'
23:12:16 +++ '[' -z '' ']'
23:12:16 +++ _OLD_VIRTUAL_PS1=
23:12:16 +++ '[' 'x(tmp.Wga7XYV1uF) ' '!=' x ']'
23:12:16 +++ PS1='(tmp.Wga7XYV1uF) '
23:12:16 +++ export PS1
23:12:16 +++ '[' -n /bin/bash -o -n '' ']'
23:12:16 +++ hash -r
23:12:16 ++ set -exu
23:12:16 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
23:12:19 ++ echo 'Installing Python Requirements'
23:12:19 Installing Python Requirements
23:12:19 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
23:12:37 ++ python3 -m pip -qq freeze
23:12:38 bcrypt==4.0.1
23:12:38 beautifulsoup4==4.12.3
23:12:38 bitarray==2.9.2
23:12:38 certifi==2024.2.2
23:12:38 cffi==1.15.1
23:12:38 charset-normalizer==2.0.12
23:12:38 cryptography==40.0.2
23:12:38 decorator==5.1.1
23:12:38 elasticsearch==7.17.9
23:12:38 elasticsearch-dsl==7.4.1
23:12:38 enum34==1.1.10
23:12:38 idna==3.6
23:12:38 importlib-resources==5.4.0
23:12:38 ipaddr==2.2.0
23:12:38 isodate==0.6.1
23:12:38 jmespath==0.10.0
23:12:38 jsonpatch==1.32
23:12:38 jsonpath-rw==1.4.0
23:12:38 jsonpointer==2.3
23:12:38 lxml==5.1.0
23:12:38 netaddr==0.8.0
23:12:38 netifaces==0.11.0
23:12:38 odltools==0.1.28
23:12:38 paramiko==3.4.0
23:12:38 pkg_resources==0.0.0
23:12:38 ply==3.11
23:12:38 pyang==2.6.0
23:12:38 pyangbind==0.8.1
23:12:38 pycparser==2.21
23:12:38 pyhocon==0.3.60
23:12:38 PyNaCl==1.5.0
23:12:38 pyparsing==3.1.1
23:12:38 python-dateutil==2.8.2
23:12:38 regex==2023.8.8
23:12:38 requests==2.27.1
23:12:38 robotframework==6.1.1
23:12:38 robotframework-httplibrary==0.4.2
23:12:38 robotframework-pythonlibcore==3.0.0
23:12:38 robotframework-requests==0.9.4
23:12:38 robotframework-selenium2library==3.0.0
23:12:38 robotframework-seleniumlibrary==5.1.3
23:12:38 robotframework-sshlibrary==3.8.0
23:12:38 scapy==2.5.0
23:12:38 scp==0.14.5
23:12:38 selenium==3.141.0
23:12:38 six==1.16.0
23:12:38 soupsieve==2.3.2.post1
23:12:38 urllib3==1.26.18
23:12:38 waitress==2.0.0
23:12:38 WebOb==1.8.7
23:12:38 WebTest==3.0.0
23:12:38 zipp==3.6.0
23:12:38 ++ mkdir -p /tmp/tmp.Wga7XYV1uF/src/onap
23:12:38 ++ rm -rf /tmp/tmp.Wga7XYV1uF/src/onap/testsuite
23:12:38 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
23:12:44 ++ echo 'Installing python confluent-kafka library'
23:12:44 Installing python confluent-kafka library
23:12:44 ++ python3 -m pip install -qq confluent-kafka
23:12:45 ++ echo 'Uninstall docker-py and reinstall docker.'
23:12:45 Uninstall docker-py and reinstall docker.
23:12:45 ++ python3 -m pip uninstall -y -qq docker
23:12:45 ++ python3 -m pip install -U -qq docker
23:12:47 ++ python3 -m pip -qq freeze
23:12:47 bcrypt==4.0.1
23:12:47 beautifulsoup4==4.12.3
23:12:47 bitarray==2.9.2
23:12:47 certifi==2024.2.2
23:12:47 cffi==1.15.1
23:12:47 charset-normalizer==2.0.12
23:12:47 confluent-kafka==2.3.0
23:12:47 cryptography==40.0.2
23:12:47 decorator==5.1.1
23:12:47 deepdiff==5.7.0
23:12:47 dnspython==2.2.1
23:12:47 docker==5.0.3
23:12:47 elasticsearch==7.17.9
23:12:47 elasticsearch-dsl==7.4.1
23:12:47 enum34==1.1.10
23:12:47 future==1.0.0
23:12:47 idna==3.6
23:12:47 importlib-resources==5.4.0
23:12:47 ipaddr==2.2.0
23:12:47 isodate==0.6.1
23:12:47 Jinja2==3.0.3
23:12:47 jmespath==0.10.0
23:12:47 jsonpatch==1.32
23:12:47 jsonpath-rw==1.4.0
23:12:47 jsonpointer==2.3
23:12:47 kafka-python==2.0.2
23:12:47 lxml==5.1.0
23:12:47 MarkupSafe==2.0.1
23:12:47 more-itertools==5.0.0
23:12:47 netaddr==0.8.0
23:12:47 netifaces==0.11.0
23:12:47 odltools==0.1.28
23:12:47 ordered-set==4.0.2
23:12:47 paramiko==3.4.0
23:12:47 pbr==6.0.0
23:12:47 pkg_resources==0.0.0
23:12:47 ply==3.11
23:12:47 protobuf==3.19.6
23:12:47 pyang==2.6.0
23:12:47 pyangbind==0.8.1
23:12:47 pycparser==2.21
23:12:47 pyhocon==0.3.60
23:12:47 PyNaCl==1.5.0
23:12:47 pyparsing==3.1.1
23:12:47 python-dateutil==2.8.2
23:12:47 PyYAML==6.0.1
23:12:47 regex==2023.8.8
23:12:47 requests==2.27.1
23:12:47 robotframework==6.1.1
23:12:47 robotframework-httplibrary==0.4.2
23:12:47 robotframework-onap==0.6.0.dev105
23:12:47 robotframework-pythonlibcore==3.0.0
23:12:47 robotframework-requests==0.9.4
23:12:47 robotframework-selenium2library==3.0.0
23:12:47 robotframework-seleniumlibrary==5.1.3
23:12:47 robotframework-sshlibrary==3.8.0
23:12:47 robotlibcore-temp==1.0.2
23:12:47 scapy==2.5.0
23:12:47 scp==0.14.5
23:12:47 selenium==3.141.0
23:12:47 six==1.16.0
23:12:47 soupsieve==2.3.2.post1
23:12:47 urllib3==1.26.18
23:12:47 waitress==2.0.0
23:12:47 WebOb==1.8.7
23:12:47 websocket-client==1.3.1
23:12:47 WebTest==3.0.0
23:12:47 zipp==3.6.0
23:12:47 ++ uname
23:12:47 ++ grep -q Linux
23:12:47 ++ sudo apt-get -y -qq install libxml2-utils
23:12:47 + load_set
23:12:47 + _setopts=ehuxB
23:12:47 ++ tr : ' '
23:12:47 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:47 + set +o braceexpand
23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:47 + set +o hashall
23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:47 + set +o interactive-comments
23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:47 + set +o nounset
23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:47 + set +o xtrace
23:12:47 ++ echo ehuxB
23:12:47 ++ sed 's/./& /g'
23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:47 + set +e
23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:47 + set +h
23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:47 + set +u
23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:47 + set +x
23:12:47 + source_safely /tmp/tmp.Wga7XYV1uF/bin/activate
23:12:47 + '[' -z /tmp/tmp.Wga7XYV1uF/bin/activate ']'
23:12:47 + relax_set
23:12:47 + set +e
23:12:47 + set +o pipefail
23:12:47 + . /tmp/tmp.Wga7XYV1uF/bin/activate
23:12:47 ++ deactivate nondestructive
23:12:47 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
23:12:47 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:47 ++ export PATH
23:12:47 ++ unset _OLD_VIRTUAL_PATH
23:12:47 ++ '[' -n '' ']'
23:12:47 ++ '[' -n /bin/bash -o -n '' ']'
23:12:47 ++ hash -r
23:12:47 ++ '[' -n '' ']'
23:12:47 ++ unset VIRTUAL_ENV
23:12:47 ++ '[' '!' nondestructive = nondestructive ']'
23:12:47 ++ VIRTUAL_ENV=/tmp/tmp.Wga7XYV1uF
23:12:47 ++ export VIRTUAL_ENV
23:12:47 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:47 ++ PATH=/tmp/tmp.Wga7XYV1uF/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
23:12:47 ++ export PATH
23:12:47 ++ '[' -n '' ']'
23:12:47 ++ '[' -z '' ']'
23:12:47 ++ _OLD_VIRTUAL_PS1='(tmp.Wga7XYV1uF) '
23:12:47 ++ '[' 'x(tmp.Wga7XYV1uF) ' '!=' x ']'
23:12:47 ++ PS1='(tmp.Wga7XYV1uF) (tmp.Wga7XYV1uF) '
23:12:47 ++ export PS1
23:12:47 ++ '[' -n /bin/bash -o -n '' ']'
23:12:47 ++ hash -r
23:12:47 + load_set
23:12:47 + _setopts=hxB
23:12:47 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:12:47 ++ tr : ' '
23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:47 + set +o braceexpand
23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:47 + set +o hashall
23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:47 + set +o interactive-comments
23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:12:47 + set +o xtrace
23:12:47 ++ echo hxB
23:12:47 ++ sed 's/./& /g'
23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:47 + set +h
23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:12:47 + set +x
23:12:47 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:47 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
23:12:47 + export TEST_OPTIONS=
23:12:47 + TEST_OPTIONS=
23:12:47 ++ mktemp -d
23:12:47 + WORKDIR=/tmp/tmp.Nh0lglCdc7
23:12:47 + cd /tmp/tmp.Nh0lglCdc7
23:12:47 + docker login -u docker -p docker nexus3.onap.org:10001
23:12:48 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
23:12:48 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
23:12:48 Configure a credential helper to remove this warning. See
23:12:48 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
23:12:48 
23:12:48 Login Succeeded
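
As the warning above notes, passing the registry password with -p exposes it on the command line; docker login supports reading it from stdin instead. A minimal equivalent of the login performed here, assuming the password is held in a variable ($NEXUS_PASSWORD is an assumed name, not defined by this job):

    # Same login as above, but the password never appears in argv or the
    # process list; docker login reads it from stdin.
    echo "$NEXUS_PASSWORD" | docker login -u docker --password-stdin nexus3.onap.org:10001
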
23:12:48 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:48 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:48 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
23:12:48 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:48 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:48 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
23:12:48 + relax_set
23:12:48 + set +e
23:12:48 + set +o pipefail
23:12:48 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
23:12:48 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
23:12:48 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:48 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
23:12:48 +++ GERRIT_BRANCH=master
23:12:48 +++ echo GERRIT_BRANCH=master
23:12:48 GERRIT_BRANCH=master
23:12:48 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
23:12:48 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
23:12:48 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
23:12:48 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
23:12:49 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:49 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
23:12:49 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:49 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:12:49 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:49 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
23:12:49 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
23:12:49 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:12:49 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:12:49 +++ grafana=false
23:12:49 +++ gui=false
23:12:49 +++ [[ 2 -gt 0 ]]
23:12:49 +++ key=apex-pdp
23:12:49 +++ case $key in
23:12:49 +++ echo apex-pdp
23:12:49 apex-pdp
23:12:49 +++ component=apex-pdp
23:12:49 +++ shift
23:12:49 +++ [[ 1 -gt 0 ]]
23:12:49 +++ key=--grafana
23:12:49 +++ case $key in
23:12:49 +++ grafana=true
23:12:49 +++ shift
23:12:49 +++ [[ 0 -gt 0 ]]
23:12:49 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:12:49 +++ echo 'Configuring docker compose...'
23:12:49 Configuring docker compose...
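
The xtrace output above shows the save_set / relax_set / load_set / source_safely helpers at work: the job records its shell options, relaxes errexit and pipefail while sourcing each setup script (so a failing script cannot abort the whole job), then restores the recorded options. A reconstructed sketch of that pattern, inferred from the trace; the real definitions live in csit/run-project-csit.sh and may differ:

    # Inferred from the trace above; an assumption, not the verbatim helpers.
    save_set() {
        RUN_CSIT_SAVE_SET="$-"             # single-letter options, e.g. "ehxB"
        RUN_CSIT_SHELLOPTS="$SHELLOPTS"    # e.g. "braceexpand:errexit:...:xtrace"
    }
    relax_set() {
        set +e                             # sourced scripts must not kill the job
        set +o pipefail
    }
    source_safely() {
        [ -z "$1" ] && return 1
        relax_set
        . "$1"                             # run the script with relaxed options
        load_set                           # load_set (visible in the trace but
                                           # not sketched here) re-applies the
                                           # recorded option strings
    }
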
23:12:49 +++ source export-ports.sh
23:12:49 +++ source get-versions.sh
23:12:51 +++ '[' -z pap ']'
23:12:51 +++ '[' -n apex-pdp ']'
23:12:51 +++ '[' apex-pdp == logs ']'
23:12:51 +++ '[' true = true ']'
23:12:51 +++ echo 'Starting apex-pdp application with Grafana'
23:12:51 Starting apex-pdp application with Grafana
23:12:51 +++ docker-compose up -d apex-pdp grafana
23:12:51 Creating network "compose_default" with the default driver
23:12:52 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
23:12:52 latest: Pulling from prom/prometheus
23:12:55 Digest: sha256:042258e3578a558ce41b036104dfa997b2d25151ab6889a3f4d6187e27b1176c
23:12:55 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
23:12:55 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
23:12:55 latest: Pulling from grafana/grafana
23:13:00 Digest: sha256:8640e5038e83ca4554ed56b9d76375158bcd51580238c6f5d8adaf3f20dd5379
23:13:00 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
23:13:00 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
23:13:01 10.10.2: Pulling from mariadb
23:13:06 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
23:13:06 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
23:13:06 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
23:13:06 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
23:13:11 Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13
23:13:11 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
23:13:11 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
23:13:11 latest: Pulling from confluentinc/cp-zookeeper
23:13:22 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866
23:13:22 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
23:13:22 Pulling kafka (confluentinc/cp-kafka:latest)...
23:13:23 latest: Pulling from confluentinc/cp-kafka
23:13:26 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047
23:13:26 Status: Downloaded newer image for confluentinc/cp-kafka:latest
23:13:26 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
23:13:26 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
23:13:31 Digest: sha256:59b5cc74cb5bbcb86c2e85d974415cfa4a6270c5728a7a489a5c6eece42f2b45
23:13:31 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
23:13:31 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
23:13:31 3.1.2-SNAPSHOT: Pulling from onap/policy-api
23:13:39 Digest: sha256:71cc3c3555fddbd324c5ddec27e24db340b82732d2f6ce50eddcfdf6715a7ab2
23:13:39 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
23:13:39 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
23:13:40 3.1.2-SNAPSHOT: Pulling from onap/policy-pap
23:13:41 Digest: sha256:448850bc9066413f6555e9c62d97da12eaa2c454a1304262987462aae46f4676
23:13:41 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
23:13:41 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
23:13:41 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
23:13:55 Digest: sha256:8670bcaff746ebc196cef9125561eb167e1e65c7e2f8d374c0d8834d57564da4
23:13:55 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
23:13:55 Creating mariadb ...
23:13:55 Creating prometheus ...
23:13:55 Creating compose_zookeeper_1 ...
23:13:55 Creating simulator ...
23:14:11 Creating prometheus ... done
23:14:11 Creating grafana ...
23:14:12 Creating mariadb ... done
23:14:12 Creating policy-db-migrator ...
23:14:13 Creating policy-db-migrator ... done
23:14:13 Creating policy-api ...
23:14:14 Creating policy-api ... done
23:14:15 Creating grafana ... done
23:14:16 Creating compose_zookeeper_1 ... done
23:14:16 Creating kafka ...
23:14:17 Creating simulator ... done
23:14:19 Creating kafka ... done
23:14:19 Creating policy-pap ...
23:14:20 Creating policy-pap ... done
23:14:20 Creating policy-apex-pdp ...
23:14:21 Creating policy-apex-pdp ... done
23:14:21 +++ echo 'Prometheus server: http://localhost:30259'
23:14:21 Prometheus server: http://localhost:30259
23:14:21 +++ echo 'Grafana server: http://localhost:30269'
23:14:21 Grafana server: http://localhost:30269
23:14:21 +++ cd /w/workspace/policy-pap-master-project-csit-pap
23:14:21 ++ sleep 10
23:14:31 ++ unset http_proxy https_proxy
23:14:31 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
23:14:31 Waiting for REST to come up on localhost port 30003...
23:14:31 NAMES                 STATUS
23:14:31 policy-apex-pdp       Up 10 seconds
23:14:31 policy-pap            Up 11 seconds
23:14:31 kafka                 Up 12 seconds
23:14:31 policy-api            Up 17 seconds
23:14:31 grafana               Up 16 seconds
23:14:31 simulator             Up 13 seconds
23:14:31 compose_zookeeper_1   Up 14 seconds
23:14:31 mariadb               Up 18 seconds
23:14:31 prometheus            Up 19 seconds
23:14:36 NAMES                 STATUS
23:14:36 policy-apex-pdp       Up 15 seconds
23:14:36 policy-pap            Up 16 seconds
23:14:36 kafka                 Up 17 seconds
23:14:36 policy-api            Up 22 seconds
23:14:36 grafana               Up 21 seconds
23:14:36 simulator             Up 18 seconds
23:14:36 compose_zookeeper_1   Up 19 seconds
23:14:36 mariadb               Up 23 seconds
23:14:36 prometheus            Up 25 seconds
23:14:41 NAMES                 STATUS
23:14:41 policy-apex-pdp       Up 20 seconds
23:14:41 policy-pap            Up 21 seconds
23:14:41 kafka                 Up 22 seconds
23:14:41 policy-api            Up 27 seconds
23:14:41 grafana               Up 26 seconds
23:14:41 simulator             Up 23 seconds
23:14:41 compose_zookeeper_1   Up 24 seconds
23:14:41 mariadb               Up 29 seconds
23:14:41 prometheus            Up 30 seconds
23:14:46 NAMES                 STATUS
23:14:46 policy-apex-pdp       Up 25 seconds
23:14:46 policy-pap            Up 26 seconds
23:14:46 kafka                 Up 27 seconds
23:14:46 policy-api            Up 32 seconds
23:14:46 grafana               Up 31 seconds
23:14:46 simulator             Up 28 seconds
23:14:46 compose_zookeeper_1   Up 29 seconds
23:14:46 mariadb               Up 34 seconds
23:14:46 prometheus            Up 35 seconds
23:14:51 NAMES                 STATUS
23:14:51 policy-apex-pdp       Up 30 seconds
23:14:51 policy-pap            Up 31 seconds
23:14:51 kafka                 Up 32 seconds
23:14:51 policy-api            Up 37 seconds
23:14:51 grafana               Up 36 seconds
23:14:51 simulator             Up 34 seconds
23:14:51 compose_zookeeper_1   Up 34 seconds
23:14:51 mariadb               Up 39 seconds
23:14:51 prometheus            Up 40 seconds
23:14:56 NAMES                 STATUS
23:14:56 policy-apex-pdp       Up 35 seconds
23:14:56 policy-pap            Up 36 seconds
23:14:56 kafka                 Up 37 seconds
23:14:56 policy-api            Up 42 seconds
23:14:56 grafana               Up 41 seconds
23:14:56 simulator             Up 39 seconds
23:14:56 compose_zookeeper_1   Up 40 seconds
23:14:56 mariadb               Up 44 seconds
23:14:56 prometheus            Up 45 seconds
23:14:56 ++ export 'SUITES=pap-test.robot
23:14:56 pap-slas.robot'
23:14:56 ++ SUITES='pap-test.robot
23:14:56 pap-slas.robot'
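
wait_for_rest.sh polls until something answers on the PAP REST port, printing the container table between attempts; the six status blocks above are those retries, roughly five seconds apart. A sketch of a loop with the same visible behavior (the real script is csit/resources/scripts/wait_for_rest.sh and may differ; nc -z probes only that the TCP port accepts connections):

    # Assumed shape of the readiness loop, not the verbatim script.
    host="${1:-localhost}"; port="${2:-30003}"
    echo "Waiting for REST to come up on ${host} port ${port}..."
    until nc -z "$host" "$port"; do
        docker ps --format 'table {{ .Names }}\t{{ .Status }}'
        sleep 5
    done
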
23:14:56 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:56 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:56 + load_set
23:14:56 + _setopts=hxB
23:14:56 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:14:56 ++ tr : ' '
23:14:56 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:56 + set +o braceexpand
23:14:56 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:56 + set +o hashall
23:14:56 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:56 + set +o interactive-comments
23:14:56 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:14:56 + set +o xtrace
23:14:56 ++ echo hxB
23:14:56 ++ sed 's/./& /g'
23:14:56 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:56 + set +h
23:14:56 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:14:56 + set +x
23:14:56 + docker_stats
23:14:56 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
23:14:56 ++ uname -s
23:14:56 + '[' Linux == Darwin ']'
23:14:56 + sh -c 'top -bn1 | head -3'
23:14:57 top - 23:14:57 up 4 min, 0 users, load average: 3.13, 1.44, 0.58
23:14:57 Tasks: 208 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
23:14:57 %Cpu(s): 13.6 us, 2.9 sy, 0.0 ni, 79.0 id, 4.4 wa, 0.0 hi, 0.0 si, 0.0 st
23:14:57 + echo
23:14:57 
23:14:57 + sh -c 'free -h'
23:14:57               total        used        free      shared  buff/cache   available
23:14:57 Mem:            31G        2.9G         22G        1.3M        6.2G         28G
23:14:57 Swap:          1.0G          0B        1.0G
23:14:57 + echo
23:14:57 
23:14:57 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:14:57 NAMES                 STATUS
23:14:57 policy-apex-pdp       Up 35 seconds
23:14:57 policy-pap            Up 36 seconds
23:14:57 kafka                 Up 38 seconds
23:14:57 policy-api            Up 42 seconds
23:14:57 grafana               Up 41 seconds
23:14:57 simulator             Up 39 seconds
23:14:57 compose_zookeeper_1   Up 40 seconds
23:14:57 mariadb               Up 44 seconds
23:14:57 prometheus            Up 45 seconds
23:14:57 + echo
23:14:57 
23:14:57 + docker stats --no-stream
23:14:59 CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O      PIDS
23:14:59 dc441b915cab   policy-apex-pdp       1.31%   185.6MiB / 31.41GiB   0.58%   10.2kB / 20kB     0B / 0B        49
23:14:59 5fd9678fc98d   policy-pap            1.75%   462.6MiB / 31.41GiB   1.44%   32.3kB / 33.8kB   0B / 153MB     61
23:14:59 a31d97e8bb12   kafka                 5.81%   398.4MiB / 31.41GiB   1.24%   75.9kB / 79.6kB   0B / 512kB     85
23:14:59 2aa965e89e62   policy-api            0.13%   770.3MiB / 31.41GiB   2.39%   1MB / 737kB       0B / 0B        56
23:14:59 1e9eb28c678a   grafana               0.02%   49.85MiB / 31.41GiB   0.15%   18.9kB / 3.55kB   0B / 24MB      17
23:14:59 fe76a8ef66c7   simulator             0.09%   124.3MiB / 31.41GiB   0.39%   1.27kB / 0B       225kB / 0B     76
23:14:59 10eb860b5193   compose_zookeeper_1   0.13%   99.47MiB / 31.41GiB   0.31%   56.4kB / 49.8kB   0B / 385kB     60
23:14:59 f2e6a844e46f   mariadb               0.02%   101.8MiB / 31.41GiB   0.32%   995kB / 1.19MB    11MB / 68.3MB  40
23:14:59 cde7f4d777b4   prometheus            0.00%   19.56MiB / 31.41GiB   0.06%   39.4kB / 1.95kB   4.1kB / 0B     13
23:14:59 + echo
23:14:59 
23:14:59 + cd /tmp/tmp.Nh0lglCdc7
23:14:59 + echo 'Reading the testplan:'
23:14:59 Reading the testplan:
23:14:59 + echo 'pap-test.robot
23:14:59 pap-slas.robot'
23:14:59 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
23:14:59 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
23:14:59 + cat testplan.txt
23:14:59 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
23:14:59 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:59 ++ xargs
23:14:59 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
23:14:59 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:59 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:59 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:59 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:14:59 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
23:14:59 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
23:14:59 + relax_set
23:14:59 + set +e
23:14:59 + set +o pipefail
23:14:59 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:59 ==============================================================================
23:14:59 pap
23:14:59 ==============================================================================
23:15:00 pap.Pap-Test
23:15:00 ==============================================================================
23:15:00 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
23:15:00 ------------------------------------------------------------------------------
23:15:01 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:15:01 ------------------------------------------------------------------------------
23:15:01 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:15:01 ------------------------------------------------------------------------------
23:15:01 Healthcheck :: Verify policy pap health check | PASS |
23:15:01 ------------------------------------------------------------------------------
23:15:22 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:15:22 ------------------------------------------------------------------------------
23:15:22 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:15:22 ------------------------------------------------------------------------------
23:15:23 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:15:23 ------------------------------------------------------------------------------
23:15:23 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:15:23 ------------------------------------------------------------------------------
23:15:23 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:15:23 ------------------------------------------------------------------------------
23:15:23 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:15:23 ------------------------------------------------------------------------------
23:15:24 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:15:24 ------------------------------------------------------------------------------
23:15:24 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:15:24 ------------------------------------------------------------------------------
23:15:24 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:15:24 ------------------------------------------------------------------------------
23:15:24 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:15:24 ------------------------------------------------------------------------------
23:15:25 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:15:25 ------------------------------------------------------------------------------
23:15:25 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:15:25 ------------------------------------------------------------------------------
23:15:25 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:15:25 ------------------------------------------------------------------------------
23:15:45 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:15:45 ------------------------------------------------------------------------------
23:15:45 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:45 ------------------------------------------------------------------------------
23:15:45 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:45 ------------------------------------------------------------------------------
23:15:46 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:46 ------------------------------------------------------------------------------
23:15:46 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:46 ------------------------------------------------------------------------------
23:15:46 pap.Pap-Test | PASS |
23:15:46 22 tests, 22 passed, 0 failed
23:15:46 ==============================================================================
23:15:46 pap.Pap-Slas
23:15:46 ==============================================================================
23:16:46 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:46 ------------------------------------------------------------------------------
23:16:46 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:46 ------------------------------------------------------------------------------
23:16:46 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:46 ------------------------------------------------------------------------------
23:16:46 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:46 ------------------------------------------------------------------------------
23:16:46 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:46 ------------------------------------------------------------------------------
23:16:46 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:46 ------------------------------------------------------------------------------
23:16:46 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:46 ------------------------------------------------------------------------------
23:16:46 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
23:16:46 ------------------------------------------------------------------------------
23:16:46 pap.Pap-Slas | PASS |
23:16:46 8 tests, 8 passed, 0 failed
23:16:46 ==============================================================================
23:16:46 pap | PASS |
23:16:46 30 tests, 30 passed, 0 failed
23:16:46 ==============================================================================
23:16:46 Output: /tmp/tmp.Nh0lglCdc7/output.xml
23:16:46 Log: /tmp/tmp.Nh0lglCdc7/log.html
23:16:46 Report: /tmp/tmp.Nh0lglCdc7/report.html
23:16:46 + RESULT=0
23:16:46 + load_set
23:16:46 + _setopts=hxB
23:16:46 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:16:46 ++ tr : ' '
23:16:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:46 + set +o braceexpand
23:16:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:46 + set +o hashall
23:16:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:46 + set +o interactive-comments
23:16:46 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:16:46 + set +o xtrace
23:16:46 ++ echo hxB
23:16:46 ++ sed 's/./& /g'
23:16:46 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:46 + set +h
23:16:46 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:16:46 + set +x
23:16:46 + echo 'RESULT: 0'
23:16:46 RESULT: 0
23:16:46 + exit 0
23:16:46 + on_exit
23:16:46 + rc=0
23:16:46 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
23:16:46 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:46 NAMES                 STATUS
23:16:46 policy-apex-pdp       Up 2 minutes
23:16:46 policy-pap            Up 2 minutes
23:16:46 kafka                 Up 2 minutes
23:16:46 policy-api            Up 2 minutes
23:16:46 grafana               Up 2 minutes
23:16:46 simulator             Up 2 minutes
23:16:46 compose_zookeeper_1   Up 2 minutes
23:16:46 mariadb               Up 2 minutes
23:16:46 prometheus            Up 2 minutes
23:16:46 + docker_stats
23:16:46 ++ uname -s
23:16:46 + '[' Linux == Darwin ']'
23:16:46 + sh -c 'top -bn1 | head -3'
23:16:47 top - 23:16:47 up 6 min, 0 users, load average: 0.83, 1.16, 0.57
23:16:47 Tasks: 196 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
23:16:47 %Cpu(s): 10.8 us, 2.2 sy, 0.0 ni, 83.4 id, 3.5 wa, 0.0 hi, 0.0 si, 0.1 st
23:16:47 + echo
23:16:47 
23:16:47 + sh -c 'free -h'
23:16:47               total        used        free      shared  buff/cache   available
23:16:47 Mem:            31G        3.0G         22G        1.3M        6.2G         27G
23:16:47 Swap:          1.0G          0B        1.0G
23:16:47 + echo
23:16:47 
23:16:47 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:16:47 NAMES                 STATUS
23:16:47 policy-apex-pdp       Up 2 minutes
23:16:47 policy-pap            Up 2 minutes
23:16:47 kafka                 Up 2 minutes
23:16:47 policy-api            Up 2 minutes
23:16:47 grafana               Up 2 minutes
23:16:47 simulator             Up 2 minutes
23:16:47 compose_zookeeper_1   Up 2 minutes
23:16:47 mariadb               Up 2 minutes
23:16:47 prometheus            Up 2 minutes
23:16:47 + echo
23:16:47 
23:16:47 + docker stats --no-stream
23:16:49 CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O      PIDS
23:16:49 dc441b915cab   policy-apex-pdp       0.48%   191.6MiB / 31.41GiB   0.60%   58kB / 92.9kB     0B / 0B        52
23:16:49 5fd9678fc98d   policy-pap            0.50%   497.1MiB / 31.41GiB   1.55%   2.33MB / 819kB    0B / 153MB     65
23:16:49 a31d97e8bb12   kafka                 3.29%   400.4MiB / 31.41GiB   1.24%   245kB / 220kB     0B / 610kB     85
23:16:49 2aa965e89e62   policy-api            0.11%   770.3MiB / 31.41GiB   2.39%   2.49MB / 1.29MB   0B / 0B        58
23:16:49 1e9eb28c678a   grafana               0.03%   59.95MiB / 31.41GiB   0.19%   19.8kB / 4.54kB   0B / 24MB      17
23:16:49 fe76a8ef66c7   simulator             0.06%   124.4MiB / 31.41GiB   0.39%   1.5kB / 0B        225kB / 0B     78
23:16:49 10eb860b5193   compose_zookeeper_1   0.11%   99.47MiB / 31.41GiB   0.31%   59.4kB / 51.5kB   0B / 385kB     60
23:16:49 f2e6a844e46f   mariadb               0.01%   103.1MiB / 31.41GiB   0.32%   1.95MB / 4.77MB   11MB / 68.6MB  28
23:16:49 cde7f4d777b4   prometheus            0.00%   25.29MiB / 31.41GiB   0.08%   219kB / 11.8kB    4.1kB / 0B     13
23:16:49 + echo
23:16:49 
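
docker_stats takes the same one-shot snapshot before and after the test run (compare the _sysinfo-1-after-setup.txt capture at 23:14:56 with this one), which is what makes the per-container deltas above meaningful. The helper's visible behavior amounts to the following; this is assembled from the trace, and the real helper in the CSIT scripts may differ in detail:

    # Visible behavior of docker_stats, reconstructed from the trace.
    docker_stats() {
        sh -c 'top -bn1 | head -3'; echo          # load average and task counts
        sh -c 'free -h'; echo                     # memory and swap usage
        docker ps --format 'table {{ .Names }}\t{{ .Status }}'; echo
        docker stats --no-stream                  # one-shot per-container stats
    }
    # $WORKSPACE stands in for the job workspace; the trace shows the absolute path.
    docker_stats | tee "$WORKSPACE/csit/archives/pap/_sysinfo-1-after-setup.txt"
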
23:16:49 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:49 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
23:16:49 + relax_set
23:16:49 + set +e
23:16:49 + set +o pipefail
23:16:49 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
23:16:49 ++ echo 'Shut down started!'
23:16:49 Shut down started!
23:16:49 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
23:16:49 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
23:16:49 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
23:16:49 ++ source export-ports.sh
23:16:49 ++ source get-versions.sh
23:16:51 ++ echo 'Collecting logs from docker compose containers...'
23:16:51 Collecting logs from docker compose containers...
23:16:51 ++ docker-compose logs
23:16:53 ++ cat docker_compose.log
23:16:53 Attaching to policy-apex-pdp, policy-pap, kafka, policy-api, policy-db-migrator, grafana, simulator, compose_zookeeper_1, mariadb, prometheus
23:16:53 zookeeper_1 | ===> User
23:16:53 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
23:16:53 zookeeper_1 | ===> Configuring ...
23:16:53 zookeeper_1 | ===> Running preflight checks ...
23:16:53 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
23:16:53 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
23:16:53 zookeeper_1 | ===> Launching ...
23:16:53 zookeeper_1 | ===> Launching zookeeper ...
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,723] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,732] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,732] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,732] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,732] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,734] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,734] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,734] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,734] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,736] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,736] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,736] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,736] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,736] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,736] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,736] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,750] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,752] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,753] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,755] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,765] INFO (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,765] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,765] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,765] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,765] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,766] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,766] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,766] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,766] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,766] INFO (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:host.name=10eb860b5193 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626168028Z level=info msg="Starting Grafana" version=10.3.3 commit=252761264e22ece57204b327f9130d3b44592c01 branch=HEAD compiled=2024-02-25T23:14:15Z 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626636877Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626653987Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626657807Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626661547Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626665327Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626668647Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626671837Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626675018Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626678948Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626683138Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626686518Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626689788Z level=info msg=Target target=[all] 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626696598Z level=info msg="Path Home" path=/usr/share/grafana 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626701308Z level=info msg="Path Data" path=/var/lib/grafana 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626704638Z level=info msg="Path Logs" path=/var/log/grafana 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626709168Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626713228Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:16:53 grafana | logger=settings t=2024-02-25T23:14:15.626717818Z level=info msg="App mode production" 23:16:53 grafana | logger=sqlstore t=2024-02-25T23:14:15.627100636Z level=info msg="Connecting to DB" dbtype=sqlite3 23:16:53 grafana | logger=sqlstore t=2024-02-25T23:14:15.627130456Z level=info msg="Creating SQLite database file" 
path=/var/lib/grafana/grafana.db 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.62783049Z level=info msg="Starting DB migrations" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.629204916Z level=info msg="Executing migration" id="create migration_log table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.630130304Z level=info msg="Migration successfully executed" id="create migration_log table" duration=924.768µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.634606041Z level=info msg="Executing migration" id="create user table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.635192362Z level=info msg="Migration successfully executed" id="create user table" duration=583.971µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.642062415Z level=info msg="Executing migration" id="add unique index user.login" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.643414101Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.350096ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.649322765Z level=info msg="Executing migration" id="add unique index user.email" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.650567209Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.242934ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.657515293Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.658790628Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.274255ms 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,767] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,768] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,768] INFO zookeeper.maxWriteQueuePollTime = 0 ms 
(org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,768] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,768] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,768] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,769] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,770] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,770] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,770] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,774] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,774] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,774] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,774] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,774] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,805] INFO Logging initialized @684ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,906] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,906] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,929] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,966] 
INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,966] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,968] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,975] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:16:53 zookeeper_1 | [2024-02-25 23:14:20,985] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,002] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,003] INFO Started @882ms (org.eclipse.jetty.server.Server) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,003] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,008] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,010] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,011] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,013] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,028] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,029] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,030] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,030] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,037] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,037] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,041] INFO Snapshot loaded in 11 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,042] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,043] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,053] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,053] INFO PrepRequestProcessor (sid:0) started, 
reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,069] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:16:53 zookeeper_1 | [2024-02-25 23:14:21,070] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 23:16:53 zookeeper_1 | [2024-02-25 23:14:23,562] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.707723363Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.708853784Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.130521ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.715981502Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.7204989Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.515638ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.725039827Z level=info msg="Executing migration" id="create user table v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.725998446Z level=info msg="Migration successfully executed" id="create user table v2" duration=959.319µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.730272278Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.731098754Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=825.766µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.738392945Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.739255231Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=861.996µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.743997693Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.744740137Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=741.534µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.749655292Z level=info msg="Executing migration" id="Drop old table user_v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.750534489Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=856.987µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.756957994Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.758797639Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.838495ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.763023401Z level=info msg="Executing migration" id="Update user table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.763054811Z level=info msg="Migration successfully executed" id="Update user table charset" duration=31.99µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.767893115Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.769499676Z level=info msg="Migration successfully executed" 
id="Add last_seen_at column to user" duration=1.60139ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.773451823Z level=info msg="Executing migration" id="Add missing user data" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.773891881Z level=info msg="Migration successfully executed" id="Add missing user data" duration=439.559µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.779898557Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:53 mariadb | 2024-02-25 23:14:12+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:53 mariadb | 2024-02-25 23:14:12+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:53 mariadb | 2024-02-25 23:14:12+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:53 mariadb | 2024-02-25 23:14:12+00:00 [Note] [Entrypoint]: Initializing database files 23:16:53 mariadb | 2024-02-25 23:14:12 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:53 mariadb | 2024-02-25 23:14:12 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:53 mariadb | 2024-02-25 23:14:12 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:53 mariadb | 23:16:53 mariadb | 23:16:53 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:16:53 mariadb | To do so, start the server, then issue the following command: 23:16:53 mariadb | 23:16:53 mariadb | '/usr/bin/mysql_secure_installation' 23:16:53 mariadb | 23:16:53 mariadb | which will also give you the option of removing the test 23:16:53 mariadb | databases and anonymous user created by default. This is 23:16:53 mariadb | strongly recommended for production servers. 23:16:53 mariadb | 23:16:53 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:53 mariadb | 23:16:53 mariadb | Please report any problems at https://mariadb.org/jira 23:16:53 mariadb | 23:16:53 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:16:53 mariadb | 23:16:53 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:53 mariadb | https://mariadb.org/get-involved/ 23:16:53 mariadb | 23:16:53 mariadb | 2024-02-25 23:14:14+00:00 [Note] [Entrypoint]: Database files initialized 23:16:53 mariadb | 2024-02-25 23:14:14+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:53 mariadb | 2024-02-25 23:14:14+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... 
23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Number of transaction pools: 1 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: 128 rollback segments are active. 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:16:53 mariadb | 2024-02-25 23:14:14 0 [Note] mariadbd: ready for connections. 23:16:53 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:53 mariadb | 2024-02-25 23:14:15+00:00 [Note] [Entrypoint]: Temporary server started. 
23:16:53 mariadb | 2024-02-25 23:14:17+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.781036618Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.137301ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.78473373Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.785540526Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=806.126µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.789312358Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.790601644Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.285675ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.794727933Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.808527839Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=13.810686ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.814483384Z level=info msg="Executing migration" id="create temp user table v1-7" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.815114696Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=626.842µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.818976021Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.819818667Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=842.596µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.823488838Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.824350565Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=861.527µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.830576895Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.83135693Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=780.655µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.835867878Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.837088301Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.221704ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.841661399Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.84169978Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=38.961µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.849321418Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.850104952Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=785.875µs 23:16:53 grafana | logger=migrator 
t=2024-02-25T23:14:15.855263572Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.856446645Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.183063ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.860926121Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.862105784Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.179603ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.868089339Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.869343004Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.252995ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.874628956Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.879857837Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.229621ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.884010717Z level=info msg="Executing migration" id="create temp_user v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.884803532Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=791.935µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.890267658Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.891081934Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=813.996µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.895887776Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.896719633Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=831.497µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.900897563Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.901740419Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=842.316µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.907158024Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.90850959Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.350696ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.91365463Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.914362543Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=709.783µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.920202336Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.920753966Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=550.99µs 23:16:53 grafana | logger=migrator 
t=2024-02-25T23:14:15.924696143Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.925363726Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=654.293µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.930500235Z level=info msg="Executing migration" id="create star table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.931450143Z level=info msg="Migration successfully executed" id="create star table" duration=949.128µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.935816348Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.936630703Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=890.998µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.942260852Z level=info msg="Executing migration" id="create org table v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.943448765Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.187063ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.952479039Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:53 mariadb | 2024-02-25 23:14:17+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:53 mariadb | 23:16:53 mariadb | 2024-02-25 23:14:17+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:53 mariadb | 23:16:53 mariadb | 2024-02-25 23:14:17+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:53 mariadb | #!/bin/bash -xv 23:16:53 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:53 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:53 mariadb | # 23:16:53 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:53 mariadb | # you may not use this file except in compliance with the License. 23:16:53 mariadb | # You may obtain a copy of the License at 23:16:53 mariadb | # 23:16:53 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:53 mariadb | # 23:16:53 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:53 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:53 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:53 mariadb | # See the License for the specific language governing permissions and 23:16:53 mariadb | # limitations under the License. 
23:16:53 mariadb | 23:16:53 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:53 mariadb | do 23:16:53 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:53 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:53 mariadb | done 23:16:53 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:53 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:53 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:53 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:53 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:53 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:53 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:53 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:53 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:53 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:53 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:53 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:53 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:53 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:16:53 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:53 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:53 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:16:53 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:53 mariadb | 23:16:53 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:16:53 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:16:53 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:16:53 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:16:53 mariadb | 23:16:53 mariadb | 2024-02-25 23:14:18+00:00 [Note] [Entrypoint]: Stopping temporary server 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: FTS optimize thread exiting. 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Starting shutdown... 
23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Buffer pool(s) dump completed at 240225 23:14:18 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Shutdown completed; log sequence number 329120; transaction id 298 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] mariadbd: Shutdown complete 23:16:53 mariadb | 23:16:53 mariadb | 2024-02-25 23:14:18+00:00 [Note] [Entrypoint]: Temporary server stopped 23:16:53 mariadb | 23:16:53 mariadb | 2024-02-25 23:14:18+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:16:53 mariadb | 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.953840916Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.361637ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.958889703Z level=info msg="Executing migration" id="create org_user table v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.959928153Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.0375ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.964657405Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.966057541Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.399956ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.97118908Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.971999746Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=807.106µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.97737186Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.978840118Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.459328ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.983137301Z level=info msg="Executing migration" id="Update org table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.983180742Z level=info msg="Migration successfully executed" id="Update org table charset" duration=44.741µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.987433474Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.987475665Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=39.831µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.99137175Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.991636665Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=264.395µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.997453248Z level=info msg="Executing migration" id="create dashboard table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:15.998632551Z level=info 
msg="Migration successfully executed" id="create dashboard table" duration=1.178492ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.003525305Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.004863841Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.338176ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.009103412Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.010476038Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.371986ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.014497055Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.015175108Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=677.353µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.02053916Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.021373836Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=834.266µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.02631477Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.027941201Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.631751ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.033479886Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.042241233Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.764027ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.047788679Z level=info msg="Executing migration" id="create dashboard v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.048758158Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=968.799µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.052837996Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.053811335Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=976.569µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.057910163Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.058857011Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=947.248µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.065356026Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.06609406Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=737.834µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.070521086Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.071895032Z level=info 
msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.374095ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.076158283Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.076244955Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=87.012µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.081292461Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.084428452Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.134761ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.120511874Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.123604014Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.08229ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.129183161Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.130561967Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.378055ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.136213395Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.137659553Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.445978ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.142412564Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.14533551Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.922266ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.150399067Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Number of transaction pools: 1 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:53 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: 128 rollback segments are active. 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
23:16:53 mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: log sequence number 329120; transaction id 299 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Note] Server socket created on IP: '0.0.0.0'. 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Note] Server socket created on IP: '::'. 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Note] mariadbd: ready for connections. 23:16:53 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:53 mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: Buffer pool(s) load completed at 240225 23:14:19 23:16:53 mariadb | 2024-02-25 23:14:19 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 23:16:53 mariadb | 2024-02-25 23:14:19 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 23:16:53 mariadb | 2024-02-25 23:14:20 39 [Warning] Aborted connection 39 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:16:53 mariadb | 2024-02-25 23:14:21 85 [Warning] Aborted connection 85 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.151335705Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=936.578µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.157097775Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.158037314Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=939.229µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.161783876Z level=info msg="Executing migration" id="Update dashboard table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.161819006Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=29.54µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.167062447Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.167092778Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=31.851µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.172071113Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.176557638Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=4.468205ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.185382649Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:53 grafana | logger=migrator 
t=2024-02-25T23:14:16.187922837Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.538248ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.192336522Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.195352449Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=3.015188ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.199384137Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.201497837Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.11305ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.206502553Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.206747428Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=244.895µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.211217734Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.212412487Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.195723ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.21671803Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.217611507Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=893.427µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.222557802Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.222599733Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=43.391µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.226857164Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.228115808Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.257655ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.234124493Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.234917738Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=788.135µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.240094948Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.251266282Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=11.172974ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.255254229Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.25584169Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=599.531µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.259942188Z level=info msg="Executing migration" id="create 
index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.260872697Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=930.429µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.266346881Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.267253679Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=906.378µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.272647453Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.272967069Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=321.616µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.277076497Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.277866953Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=788.835µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.283238236Z level=info msg="Executing migration" id="Add check_sum column" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.286976007Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.738501ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.292040184Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.29287369Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=833.326µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.298238203Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.298422947Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=187.334µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.30224153Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.302421453Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=180.243µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.308541011Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.310009329Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.467598ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.315097647Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.318887169Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.796412ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.3225356Z level=info msg="Executing migration" id="create data_source table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.323289254Z level=info msg="Migration successfully executed" id="create data_source table" duration=752.954µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.32935338Z level=info msg="Executing 
migration" id="add index data_source.account_id" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.330182285Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=828.665µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.334231774Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.33510802Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=876.216µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.33976712Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.340978443Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.211273ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.346224144Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.347380905Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.156621ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.351473024Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.362322183Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=10.849479ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.366743317Z level=info msg="Executing migration" id="create data_source table v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.367622385Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=878.798µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.372567799Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.373499267Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=930.878µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.377261289Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.378570344Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.308285ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.383909196Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.385219842Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.309865ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.389429492Z level=info msg="Executing migration" id="Add column with_credentials" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.393110233Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.679271ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.397058219Z level=info msg="Executing migration" id="Add secure json data column" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.399533076Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.474488ms 23:16:53 grafana | logger=migrator 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.404624044Z level=info msg="Executing migration" id="Update data_source table charset"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.404650765Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=27.371µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.409013778Z level=info msg="Executing migration" id="Update initial version to 1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.409223552Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=209.444µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.413579786Z level=info msg="Executing migration" id="Add read_only data column"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.417344498Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.764142ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.423023786Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.42321481Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=197.384µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.428355979Z level=info msg="Executing migration" id="Update json_data with nulls"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.428605693Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=249.034µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.432721763Z level=info msg="Executing migration" id="Add uid column"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.4362323Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.509987ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.441073803Z level=info msg="Executing migration" id="Update uid value"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.441409719Z level=info msg="Migration successfully executed" id="Update uid value" duration=336.056µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.446204652Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.447460205Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.255033ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.45288631Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.453776376Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=889.506µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.458565879Z level=info msg="Executing migration" id="create api_key table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.45965637Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.086092ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.464347279Z level=info msg="Executing migration" id="add index api_key.account_id"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.465581264Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.233245ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.516739045Z level=info msg="Executing migration" id="add index api_key.key"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.51810662Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.368195ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.523934182Z level=info msg="Executing migration" id="add index api_key.account_id_name"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.524563095Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=628.673µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.531066199Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.53218395Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.117861ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.539109214Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.540234435Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.125361ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.545052248Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.54623442Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.181483ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.551293947Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.560445462Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=9.155125ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.565351407Z level=info msg="Executing migration" id="create api_key table v2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.566239475Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=887.218µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.569693131Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.572095677Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=2.399575ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.577835977Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.579207852Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.372676ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.582824361Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.583671858Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=847.247µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.588026621Z level=info msg="Executing migration" id="copy api_key v1 to v2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.588401289Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=398.987µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.593621929Z level=info msg="Executing migration" id="Drop old table api_key_v1"
executed" id="Drop old table api_key_v1" duration=560.591µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.597930682Z level=info msg="Executing migration" id="Update api_key table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.597957633Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=27.94µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.601634703Z level=info msg="Executing migration" id="Add expires to api_key table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.604230913Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.59637ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.609133547Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.611679386Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.545329ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.615933907Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.61610309Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=168.683µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.620600817Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.624049082Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.446285ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.629246412Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.631889723Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.644041ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.63593733Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.636627094Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=689.394µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.641793104Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.642340144Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=546.82µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.647924921Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.649044382Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.118671ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.654261692Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.655437385Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.174763ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.659534484Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.660716856Z level=info msg="Migration 
successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.173372ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.665704652Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.666901745Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.196793ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.671300389Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.67136271Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=63.051µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.675761105Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.675790986Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=29.84µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.680118648Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.68382427Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.697692ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.689414297Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.692139628Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.725121ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.704967475Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.705194479Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=232.534µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.714679102Z level=info msg="Executing migration" id="create quota table v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.716052198Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.379146ms 23:16:53 kafka | ===> User 23:16:53 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:53 kafka | ===> Configuring ... 23:16:53 kafka | Running in Zookeeper mode... 23:16:53 kafka | ===> Running preflight checks ... 23:16:53 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:16:53 kafka | ===> Check if Zookeeper is healthy ... 23:16:53 kafka | SLF4J: Class path contains multiple SLF4J bindings. 23:16:53 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:53 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:53 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
23:16:53 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
23:16:53 kafka | [2024-02-25 23:14:23,492] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,492] INFO Client environment:host.name=a31d97e8bb12 (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,492] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,492] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,493] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,493] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,493] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,493] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,493] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,493] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,493] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,493] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,493] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,493] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,494] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,494] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,494] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,494] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,497] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,501] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:16:53 kafka | [2024-02-25 23:14:23,506] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
23:16:53 kafka | [2024-02-25 23:14:23,515] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
23:16:53 kafka | [2024-02-25 23:14:23,532] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
23:16:53 kafka | [2024-02-25 23:14:23,533] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
23:16:53 kafka | [2024-02-25 23:14:23,541] INFO Socket connection established, initiating session, client: /172.17.0.9:58238, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
23:16:53 kafka | [2024-02-25 23:14:23,579] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003c5ff0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
23:16:53 kafka | [2024-02-25 23:14:23,700] INFO Session: 0x1000003c5ff0000 closed (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:23,700] INFO EventThread shut down for session: 0x1000003c5ff0000 (org.apache.zookeeper.ClientCnxn)
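The block above is the image's ZooKeeper preflight check: open a session (connect, negotiate the 40000 ms session timeout), then close it again. The container performs this from Java via io.confluent.admin.utils.ZookeeperConnectionWatcher, as logged; a sketch of an equivalent probe in Python using the kazoo client library (an assumption for illustration, not part of this job) would be:

    # Equivalent "is ZooKeeper healthy?" probe using kazoo (illustrative only;
    # the container itself uses Confluent's Java ZookeeperConnectionWatcher).
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zookeeper:2181", timeout=40.0)  # cf. sessionTimeout=40000 above
    try:
        zk.start()  # opens the socket and negotiates a session
        print("zookeeper reachable, session id:", hex(zk.client_id[0]))
    finally:
        zk.stop()   # cf. "Session: 0x1000003c5ff0000 closed"
        zk.close()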
23:16:53 kafka | Using log4j config /etc/kafka/log4j.properties
23:16:53 kafka | ===> Launching ...
23:16:53 kafka | ===> Launching kafka ...
23:16:53 kafka | [2024-02-25 23:14:24,404] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
23:16:53 kafka | [2024-02-25 23:14:24,771] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
23:16:53 kafka | [2024-02-25 23:14:24,846] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
23:16:53 kafka | [2024-02-25 23:14:24,847] INFO starting (kafka.server.KafkaServer)
23:16:53 kafka | [2024-02-25 23:14:24,847] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
23:16:53 kafka | [2024-02-25 23:14:24,861] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:host.name=a31d97e8bb12 (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,867] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper)
23:16:53 kafka | [2024-02-25 23:14:24,871] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
23:16:53 kafka | [2024-02-25 23:14:24,877] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
23:16:53 kafka | [2024-02-25 23:14:24,879] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
23:16:53 kafka | [2024-02-25 23:14:24,884] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn)
23:16:53 kafka | [2024-02-25 23:14:24,892] INFO Socket connection established, initiating session, client: /172.17.0.9:53356, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn)
23:16:53 kafka | [2024-02-25 23:14:24,931] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003c5ff0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
23:16:53 kafka | [2024-02-25 23:14:24,937] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
23:16:53 kafka | [2024-02-25 23:14:25,221] INFO Cluster ID = EgVdN6KHQUyZtQ3qnQB0kQ (kafka.server.KafkaServer)
23:16:53 kafka | [2024-02-25 23:14:25,225] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
23:16:53 kafka | [2024-02-25 23:14:25,277] INFO KafkaConfig values:
23:16:53 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
23:16:53 kafka | alter.config.policy.class.name = null
23:16:53 kafka | alter.log.dirs.replication.quota.window.num = 11
23:16:53 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
23:16:53 kafka | authorizer.class.name =
23:16:53 kafka | auto.create.topics.enable = true
23:16:53 kafka | auto.include.jmx.reporter = true
23:16:53 kafka | auto.leader.rebalance.enable = true
23:16:53 kafka | background.threads = 10
23:16:53 kafka | broker.heartbeat.interval.ms = 2000
23:16:53 kafka | broker.id = 1
23:16:53 kafka | broker.id.generation.enable = true
23:16:53 kafka | broker.rack = null
23:16:53 kafka | broker.session.timeout.ms = 9000
23:16:53 kafka | client.quota.callback.class = null
23:16:53 kafka | compression.type = producer
23:16:53 kafka | connection.failed.authentication.delay.ms = 100
23:16:53 kafka | connections.max.idle.ms = 600000
23:16:53 kafka | connections.max.reauth.ms = 0
23:16:53 kafka | control.plane.listener.name = null
23:16:53 kafka | controlled.shutdown.enable = true
23:16:53 kafka | controlled.shutdown.max.retries = 3
23:16:53 kafka | controlled.shutdown.retry.backoff.ms = 5000
23:16:53 kafka | controller.listener.names = null
23:16:53 kafka | controller.quorum.append.linger.ms = 25
23:16:53 kafka | controller.quorum.election.backoff.max.ms = 1000
23:16:53 kafka | controller.quorum.election.timeout.ms = 1000
23:16:53 kafka | controller.quorum.fetch.timeout.ms = 2000
23:16:53 kafka | controller.quorum.request.timeout.ms = 2000
23:16:53 kafka | controller.quorum.retry.backoff.ms = 20
23:16:53 kafka | controller.quorum.voters = []
23:16:53 kafka | controller.quota.window.num = 11
23:16:53 kafka | controller.quota.window.size.seconds = 1
23:16:53 kafka | controller.socket.timeout.ms = 30000
23:16:53 kafka | create.topic.policy.class.name = null
23:16:53 kafka | default.replication.factor = 1
23:16:53 kafka | delegation.token.expiry.check.interval.ms = 3600000
23:16:53 kafka | delegation.token.expiry.time.ms = 86400000
23:16:53 kafka | delegation.token.master.key = null
23:16:53 kafka | delegation.token.max.lifetime.ms = 604800000
23:16:53 kafka | delegation.token.secret.key = null
23:16:53 kafka | delete.records.purgatory.purge.interval.requests = 1
23:16:53 kafka | delete.topic.enable = true
23:16:53 kafka | early.start.listeners = null
23:16:53 kafka | fetch.max.bytes = 57671680
23:16:53 kafka | fetch.purgatory.purge.interval.requests = 1000
23:16:53 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
23:16:53 kafka | group.consumer.heartbeat.interval.ms = 5000
23:16:53 kafka | group.consumer.max.heartbeat.interval.ms = 15000
23:16:53 kafka | group.consumer.max.session.timeout.ms = 60000
23:16:53 kafka | group.consumer.max.size = 2147483647
23:16:53 kafka | group.consumer.min.heartbeat.interval.ms = 5000
23:16:53 kafka | group.consumer.min.session.timeout.ms = 45000
23:16:53 kafka | group.consumer.session.timeout.ms = 45000
23:16:53 kafka | group.coordinator.new.enable = false
23:16:53 kafka | group.coordinator.threads = 1
23:16:53 kafka | group.initial.rebalance.delay.ms = 3000
23:16:53 kafka | group.max.session.timeout.ms = 1800000
23:16:53 kafka | group.max.size = 2147483647
23:16:53 kafka | group.min.session.timeout.ms = 6000
23:16:53 kafka | initial.broker.registration.timeout.ms = 60000
23:16:53 kafka | inter.broker.listener.name = PLAINTEXT
23:16:53 kafka | inter.broker.protocol.version = 3.6-IV2
23:16:53 kafka | kafka.metrics.polling.interval.secs = 10
23:16:53 kafka | kafka.metrics.reporters = []
23:16:53 kafka | leader.imbalance.check.interval.seconds = 300
23:16:53 kafka | leader.imbalance.per.broker.percentage = 10
23:16:53 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
23:16:53 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
23:16:53 kafka | log.cleaner.backoff.ms = 15000
23:16:53 kafka | log.cleaner.dedupe.buffer.size = 134217728
23:16:53 kafka | log.cleaner.delete.retention.ms = 86400000
23:16:53 kafka | log.cleaner.enable = true
23:16:53 kafka | log.cleaner.io.buffer.load.factor = 0.9
23:16:53 kafka | log.cleaner.io.buffer.size = 524288
23:16:53 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
23:16:53 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
23:16:53 kafka | log.cleaner.min.cleanable.ratio = 0.5
23:16:53 kafka | log.cleaner.min.compaction.lag.ms = 0
23:16:53 kafka | log.cleaner.threads = 1
23:16:53 kafka | log.cleanup.policy = [delete]
23:16:53 kafka | log.dir = /tmp/kafka-logs
23:16:53 kafka | log.dirs = /var/lib/kafka/data
23:16:53 kafka | log.flush.interval.messages = 9223372036854775807
23:16:53 kafka | log.flush.interval.ms = null
23:16:53 kafka | log.flush.offset.checkpoint.interval.ms = 60000
23:16:53 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
23:16:53 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
23:16:53 kafka | log.index.interval.bytes = 4096
23:16:53 kafka | log.index.size.max.bytes = 10485760
23:16:53 kafka | log.local.retention.bytes = -2
23:16:53 kafka | log.local.retention.ms = -2
23:16:53 kafka | log.message.downconversion.enable = true
23:16:53 kafka | log.message.format.version = 3.0-IV1
23:16:53 kafka | log.message.timestamp.after.max.ms = 9223372036854775807
23:16:53 kafka | log.message.timestamp.before.max.ms = 9223372036854775807
23:16:53 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
23:16:53 kafka | log.message.timestamp.type = CreateTime
23:16:53 kafka | log.preallocate = false
23:16:53 kafka | log.retention.bytes = -1
23:16:53 kafka | log.retention.check.interval.ms = 300000
23:16:53 kafka | log.retention.hours = 168
23:16:53 kafka | log.retention.minutes = null
23:16:53 kafka | log.retention.ms = null
23:16:53 kafka | log.roll.hours = 168
23:16:53 kafka | log.roll.jitter.hours = 0
23:16:53 kafka | log.roll.jitter.ms = null
23:16:53 kafka | log.roll.ms = null
23:16:53 kafka | log.segment.bytes = 1073741824
23:16:53 kafka | log.segment.delete.delay.ms = 60000
23:16:53 kafka | max.connection.creation.rate = 2147483647
23:16:53 kafka | max.connections = 2147483647
23:16:53 kafka | max.connections.per.ip = 2147483647
23:16:53 kafka | max.connections.per.ip.overrides =
23:16:53 kafka | max.incremental.fetch.session.cache.slots = 1000
23:16:53 kafka | message.max.bytes = 1048588
23:16:53 kafka | metadata.log.dir = null
23:16:53 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
23:16:53 kafka | metadata.log.max.snapshot.interval.ms = 3600000
23:16:53 kafka | metadata.log.segment.bytes = 1073741824
23:16:53 kafka | metadata.log.segment.min.bytes = 8388608
23:16:53 kafka | metadata.log.segment.ms = 604800000
23:16:53 kafka | metadata.max.idle.interval.ms = 500
23:16:53 kafka | metadata.max.retention.bytes = 104857600
23:16:53 kafka | metadata.max.retention.ms = 604800000
23:16:53 kafka | metric.reporters = []
23:16:53 kafka | metrics.num.samples = 2
23:16:53 kafka | metrics.recording.level = INFO
23:16:53 kafka | metrics.sample.window.ms = 30000
23:16:53 kafka | min.insync.replicas = 1
23:16:53 kafka | node.id = 1
23:16:53 kafka | num.io.threads = 8
23:16:53 kafka | num.network.threads = 3
23:16:53 kafka | num.partitions = 1
23:16:53 kafka | num.recovery.threads.per.data.dir = 1
23:16:53 kafka | num.replica.alter.log.dirs.threads = null
23:16:53 kafka | num.replica.fetchers = 1
23:16:53 kafka | offset.metadata.max.bytes = 4096
23:16:53 kafka | offsets.commit.required.acks = -1
23:16:53 kafka | offsets.commit.timeout.ms = 5000
23:16:53 kafka | offsets.load.buffer.size = 5242880
23:16:53 kafka | offsets.retention.check.interval.ms = 600000
23:16:53 kafka | offsets.retention.minutes = 10080
23:16:53 kafka | offsets.topic.compression.codec = 0
23:16:53 kafka | offsets.topic.num.partitions = 50
23:16:53 kafka | offsets.topic.replication.factor = 1
23:16:53 kafka | offsets.topic.segment.bytes = 104857600
23:16:53 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
23:16:53 kafka | password.encoder.iterations = 4096
23:16:53 kafka | password.encoder.key.length = 128
23:16:53 kafka | password.encoder.keyfactory.algorithm = null
23:16:53 kafka | password.encoder.old.secret = null
23:16:53 kafka | password.encoder.secret = null
23:16:53 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
23:16:53 kafka | process.roles = []
23:16:53 kafka | producer.id.expiration.check.interval.ms = 600000
23:16:53 kafka | producer.id.expiration.ms = 86400000
23:16:53 kafka | producer.purgatory.purge.interval.requests = 1000
23:16:53 kafka | queued.max.request.bytes = -1
23:16:53 kafka | queued.max.requests = 500
23:16:53 kafka | quota.window.num = 11
23:16:53 kafka | quota.window.size.seconds = 1
23:16:53 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
23:16:53 kafka | remote.log.manager.task.interval.ms = 30000
23:16:53 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
23:16:53 kafka | remote.log.manager.task.retry.backoff.ms = 500
23:16:53 kafka | remote.log.manager.task.retry.jitter = 0.2
23:16:53 kafka | remote.log.manager.thread.pool.size = 10
23:16:53 kafka | remote.log.metadata.custom.metadata.max.bytes = 128
23:16:53 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
23:16:53 kafka | remote.log.metadata.manager.class.path = null
23:16:53 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
23:16:53 kafka | remote.log.metadata.manager.listener.name = null
23:16:53 kafka | remote.log.reader.max.pending.tasks = 100
23:16:53 kafka | remote.log.reader.threads = 10
23:16:53 kafka | remote.log.storage.manager.class.name = null
23:16:53 kafka | remote.log.storage.manager.class.path = null
23:16:53 kafka | remote.log.storage.manager.impl.prefix = rsm.config.
23:16:53 kafka | remote.log.storage.system.enable = false
23:16:53 kafka | replica.fetch.backoff.ms = 1000
23:16:53 kafka | replica.fetch.max.bytes = 1048576
23:16:53 kafka | replica.fetch.min.bytes = 1
23:16:53 kafka | replica.fetch.response.max.bytes = 10485760
23:16:53 kafka | replica.fetch.wait.max.ms = 500
23:16:53 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
23:16:53 kafka | replica.lag.time.max.ms = 30000
23:16:53 kafka | replica.selector.class = null
23:16:53 kafka | replica.socket.receive.buffer.bytes = 65536
23:16:53 kafka | replica.socket.timeout.ms = 30000
23:16:53 kafka | replication.quota.window.num = 11
23:16:53 kafka | replication.quota.window.size.seconds = 1
23:16:53 kafka | request.timeout.ms = 30000
23:16:53 kafka | reserved.broker.max.id = 1000
23:16:53 kafka | sasl.client.callback.handler.class = null
23:16:53 kafka | sasl.enabled.mechanisms = [GSSAPI]
23:16:53 kafka | sasl.jaas.config = null
23:16:53 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:53 kafka | sasl.kerberos.min.time.before.relogin = 60000
23:16:53 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
23:16:53 kafka | sasl.kerberos.service.name = null
23:16:53 kafka | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:53 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:53 kafka | sasl.login.callback.handler.class = null
23:16:53 kafka | sasl.login.class = null
23:16:53 kafka | sasl.login.connect.timeout.ms = null
23:16:53 kafka | sasl.login.read.timeout.ms = null
23:16:53 kafka | sasl.login.refresh.buffer.seconds = 300
23:16:53 kafka | sasl.login.refresh.min.period.seconds = 60
23:16:53 kafka | sasl.login.refresh.window.factor = 0.8
23:16:53 kafka | sasl.login.refresh.window.jitter = 0.05
23:16:53 kafka | sasl.login.retry.backoff.max.ms = 10000
23:16:53 kafka | sasl.login.retry.backoff.ms = 100
23:16:53 kafka | sasl.mechanism.controller.protocol = GSSAPI
23:16:53 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
23:16:53 kafka | sasl.oauthbearer.clock.skew.seconds = 30
23:16:53 kafka | sasl.oauthbearer.expected.audience = null
23:16:53 kafka | sasl.oauthbearer.expected.issuer = null
23:16:53 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:53 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:53 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:53 kafka | sasl.oauthbearer.jwks.endpoint.url = null
23:16:53 kafka | sasl.oauthbearer.scope.claim.name = scope
23:16:53 kafka | sasl.oauthbearer.sub.claim.name = sub
23:16:53 kafka | sasl.oauthbearer.token.endpoint.url = null
23:16:53 kafka | sasl.server.callback.handler.class = null
23:16:53 kafka | sasl.server.max.receive.size = 524288
23:16:53 kafka | security.inter.broker.protocol = PLAINTEXT
23:16:53 kafka | security.providers = null
23:16:53 kafka | server.max.startup.time.ms = 9223372036854775807
23:16:53 kafka | socket.connection.setup.timeout.max.ms = 30000
23:16:53 kafka | socket.connection.setup.timeout.ms = 10000
23:16:53 kafka | socket.listen.backlog.size = 50
23:16:53 kafka | socket.receive.buffer.bytes = 102400
23:16:53 kafka | socket.request.max.bytes = 104857600
23:16:53 kafka | socket.send.buffer.bytes = 102400
23:16:53 kafka | ssl.cipher.suites = []
23:16:53 kafka | ssl.client.auth = none
23:16:53 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:53 kafka | ssl.endpoint.identification.algorithm = https
23:16:53 kafka | ssl.engine.factory.class = null
23:16:53 kafka | ssl.key.password = null
23:16:53 kafka | ssl.keymanager.algorithm = SunX509
23:16:53 kafka | ssl.keystore.certificate.chain = null
23:16:53 kafka | ssl.keystore.key = null
23:16:53 kafka | ssl.keystore.location = null
23:16:53 kafka | ssl.keystore.password = null
23:16:53 kafka | ssl.keystore.type = JKS
23:16:53 kafka | ssl.principal.mapping.rules = DEFAULT
23:16:53 kafka | ssl.protocol = TLSv1.3
23:16:53 kafka | ssl.provider = null
23:16:53 kafka | ssl.secure.random.implementation = null
23:16:53 kafka | ssl.trustmanager.algorithm = PKIX
23:16:53 kafka | ssl.truststore.certificates = null
23:16:53 kafka | ssl.truststore.location = null
23:16:53 kafka | ssl.truststore.password = null
23:16:53 kafka | ssl.truststore.type = JKS
23:16:53 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
23:16:53 kafka | transaction.max.timeout.ms = 900000
23:16:53 kafka | transaction.partition.verification.enable = true
23:16:53 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
23:16:53 kafka | transaction.state.log.load.buffer.size = 5242880
23:16:53 kafka | transaction.state.log.min.isr = 2
23:16:53 kafka | transaction.state.log.num.partitions = 50
23:16:53 kafka | transaction.state.log.replication.factor = 3
23:16:53 kafka | transaction.state.log.segment.bytes = 104857600
23:16:53 kafka | transactional.id.expiration.ms = 604800000
23:16:53 kafka | unclean.leader.election.enable = false
23:16:53 kafka | unstable.api.versions.enable = false
23:16:53 kafka | zookeeper.clientCnxnSocket = null
23:16:53 kafka | zookeeper.connect = zookeeper:2181
23:16:53 kafka | zookeeper.connection.timeout.ms = null
23:16:53 kafka | zookeeper.max.in.flight.requests = 10
23:16:53 kafka | zookeeper.metadata.migration.enable = false
23:16:53 kafka | zookeeper.session.timeout.ms = 18000
23:16:53 kafka | zookeeper.set.acl = false
23:16:53 kafka | zookeeper.ssl.cipher.suites = null
23:16:53 kafka | zookeeper.ssl.client.enable = false
23:16:53 kafka | zookeeper.ssl.crl.enable = false
23:16:53 kafka | zookeeper.ssl.enabled.protocols = null
23:16:53 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
23:16:53 kafka | zookeeper.ssl.keystore.location = null
23:16:53 kafka | zookeeper.ssl.keystore.password = null
23:16:53 kafka | zookeeper.ssl.keystore.type = null
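The KafkaConfig dump above shows the broker exposed on two listeners — PLAINTEXT://kafka:9092 inside the Docker network and PLAINTEXT_HOST://localhost:29092 from the host — with auto.create.topics.enable = true. A minimal host-side smoke test against the host listener, sketched with the kafka-python client (an assumption for illustration; the CSIT suite itself does not use this, and the topic name is made up):

    # Host-side smoke test against the PLAINTEXT_HOST listener advertised above.
    # With auto.create.topics.enable = true, sending to a new topic creates it.
    from kafka import KafkaConsumer, KafkaProducer

    producer = KafkaProducer(bootstrap_servers="localhost:29092")
    producer.send("smoke-test", b"hello from the host")  # topic name is illustrative
    producer.flush()

    consumer = KafkaConsumer(
        "smoke-test",
        bootstrap_servers="localhost:29092",
        auto_offset_reset="earliest",  # read from the start of the topic
        consumer_timeout_ms=5000,      # give up if nothing arrives
    )
    for record in consumer:
        print(record.value)  # b'hello from the host'
        break

From another container on the same network, bootstrap_servers would be "kafka:9092" instead, matching the in-network listener.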
duration=870.378µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.750738893Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.755467614Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.72318ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.761298216Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.761326836Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=29.54µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.79544791Z level=info msg="Executing migration" id="create session table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.796882528Z level=info msg="Migration successfully executed" id="create session table" duration=1.433638ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.986409474Z level=info msg="Executing migration" id="Drop old table playlist table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.986637848Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=227.474µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.996081659Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:16.996438086Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=357.407µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.006788742Z level=info msg="Executing migration" id="create playlist table v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.008186638Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.393576ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.013481256Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.014637134Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.155318ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.021042902Z level=info msg="Executing migration" id="Update playlist table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.021121593Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=80.221µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.026261691Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.026443224Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=180.953µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.030502906Z level=info msg="Executing migration" id="Add playlist column created_at" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.035411221Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.906816ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.041635745Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.045119408Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.482503ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.051395475Z 
level=info msg="Executing migration" id="drop preferences table v2" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.051545317Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=149.702µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.055762851Z level=info msg="Executing migration" id="drop preferences table v3" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.056085616Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=321.985µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.062828969Z level=info msg="Executing migration" id="create preferences table v3" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.064064878Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.237029ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.071274868Z level=info msg="Executing migration" id="Update preferences table charset" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.071451681Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=176.313µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.087176881Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.090632374Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.458382ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.09766379Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.097856303Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=192.793µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.102521135Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.105803565Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.275119ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.109852847Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.114851402Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.998335ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.12060439Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.120701252Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=97.232µs 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.124433129Z level=info msg="Executing migration" id="Add preferences index org_id" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.126074144Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.640095ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.130470711Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.131816091Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.34569ms 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.137862074Z level=info msg="Executing migration" id="create alert table 
v1" 23:16:53 policy-api | Waiting for mariadb port 3306... 23:16:53 policy-api | mariadb (172.17.0.2:3306) open 23:16:53 policy-api | Waiting for policy-db-migrator port 6824... 23:16:53 policy-api | policy-db-migrator (172.17.0.7:6824) open 23:16:53 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:53 policy-api | 23:16:53 policy-api | . ____ _ __ _ _ 23:16:53 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:53 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:53 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:53 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:53 policy-api | =========|_|==============|___/=/_/_/_/ 23:16:53 policy-api | :: Spring Boot :: (v3.1.8) 23:16:53 policy-api | 23:16:53 policy-api | [2024-02-25T23:14:27.866+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 23 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:53 policy-api | [2024-02-25T23:14:27.869+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:53 policy-api | [2024-02-25T23:14:29.750+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:53 policy-api | [2024-02-25T23:14:29.860+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 99 ms. Found 6 JPA repository interfaces. 23:16:53 policy-api | [2024-02-25T23:14:30.313+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:53 policy-api | [2024-02-25T23:14:30.314+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:53 policy-api | [2024-02-25T23:14:31.076+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:53 policy-api | [2024-02-25T23:14:31.095+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:53 policy-api | [2024-02-25T23:14:31.099+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:53 policy-api | [2024-02-25T23:14:31.099+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:53 policy-api | [2024-02-25T23:14:31.204+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:53 policy-api | [2024-02-25T23:14:31.205+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3255 ms 23:16:53 policy-api | [2024-02-25T23:14:31.678+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:53 policy-api | [2024-02-25T23:14:31.799+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:53 policy-api | [2024-02-25T23:14:31.804+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:53 policy-api | [2024-02-25T23:14:31.854+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:53 policy-api | [2024-02-25T23:14:32.245+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:53 policy-api | [2024-02-25T23:14:32.268+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:53 policy-api | [2024-02-25T23:14:32.376+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@63b3ee82 23:16:53 policy-api | [2024-02-25T23:14:32.378+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
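The HikariPool-1 lines above are the policy-api connection pool warming up against the mariadb container. For readers unfamiliar with the log format, a minimal Java sketch of an equivalent manual HikariCP setup follows; the JDBC URL, database name, and credentials here are hypothetical, as the real values come from apiParameters.yaml:

    // Minimal sketch only; "HikariPool-1 - Starting..." and "Start completed."
    // are emitted while HikariDataSource is being constructed.
    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class PoolSketch {
        public static void main(String[] args) {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // hypothetical DB name
            config.setUsername("policy_user");                            // hypothetical
            config.setPassword("policy_user");                            // hypothetical
            try (HikariDataSource ds = new HikariDataSource(config)) {
                System.out.println("pool started");
            }
        }
    }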
23:16:53 policy-api | [2024-02-25T23:14:34.435+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
23:16:53 policy-api | [2024-02-25T23:14:34.440+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
23:16:53 policy-api | [2024-02-25T23:14:35.519+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
23:16:53 kafka | zookeeper.ssl.ocsp.enable = false
23:16:53 kafka | zookeeper.ssl.protocol = TLSv1.2
23:16:53 kafka | zookeeper.ssl.truststore.location = null
23:16:53 kafka | zookeeper.ssl.truststore.password = null
23:16:53 kafka | zookeeper.ssl.truststore.type = null
23:16:53 kafka | (kafka.server.KafkaConfig)
23:16:53 kafka | [2024-02-25 23:14:25,311] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:53 kafka | [2024-02-25 23:14:25,311] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:53 kafka | [2024-02-25 23:14:25,312] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:53 kafka | [2024-02-25 23:14:25,316] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
23:16:53 kafka | [2024-02-25 23:14:25,373] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
23:16:53 kafka | [2024-02-25 23:14:25,379] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
23:16:53 kafka | [2024-02-25 23:14:25,388] INFO Loaded 0 logs in 14ms (kafka.log.LogManager)
23:16:53 kafka | [2024-02-25 23:14:25,390] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
23:16:53 kafka | [2024-02-25 23:14:25,391] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
23:16:53 kafka | [2024-02-25 23:14:25,403] INFO Starting the log cleaner (kafka.log.LogCleaner)
23:16:53 kafka | [2024-02-25 23:14:25,478] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
23:16:53 kafka | [2024-02-25 23:14:25,493] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
23:16:53 kafka | [2024-02-25 23:14:25,508] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
23:16:53 kafka | [2024-02-25 23:14:25,534] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
23:16:53 kafka | [2024-02-25 23:14:25,866] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:16:53 kafka | [2024-02-25 23:14:25,888] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
23:16:53 kafka | [2024-02-25 23:14:25,888] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
23:16:53 kafka | [2024-02-25 23:14:25,894] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
23:16:53 kafka | [2024-02-25 23:14:25,898] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
23:16:53 kafka | [2024-02-25 23:14:25,923] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:53 kafka | [2024-02-25 23:14:25,924] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:53 kafka | [2024-02-25 23:14:25,926] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:53 kafka | [2024-02-25 23:14:25,929] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:53 kafka | [2024-02-25 23:14:25,930] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:53 kafka | [2024-02-25 23:14:25,941] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
23:16:53 kafka | [2024-02-25 23:14:25,943] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
23:16:53 kafka | [2024-02-25 23:14:25,968] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
23:16:53 kafka | [2024-02-25 23:14:26,007] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1708902865986,1708902865986,1,0,0,72057610244653057,258,0,27
23:16:53 kafka | (kafka.zk.KafkaZkClient)
23:16:53 kafka | [2024-02-25 23:14:26,008] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
23:16:53 kafka | [2024-02-25 23:14:26,065] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
23:16:53 kafka | [2024-02-25 23:14:26,073] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:53 kafka | [2024-02-25 23:14:26,079] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:53 kafka | [2024-02-25 23:14:26,080] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:53 kafka | [2024-02-25 23:14:26,095] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
23:16:53 kafka | [2024-02-25 23:14:26,101] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
23:16:53 kafka | [2024-02-25 23:14:26,105] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.139268575Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.406101ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.208697525Z level=info msg="Executing migration" id="add index alert org_id & id "
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.21038270Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.686035ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.216151539Z level=info msg="Executing migration" id="add index alert state"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.217012022Z level=info msg="Migration successfully executed" id="add index alert state" duration=860.273µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.222482085Z level=info msg="Executing migration" id="add index alert dashboard_id"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.223980107Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.497472ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.22810883Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.228843741Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=735.181µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.236762072Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.238205093Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.442611ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.243625545Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.244804563Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.179838ms
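The broker registration entries above record the znode written under /brokers/ids/1, including the advertised PLAINTEXT://kafka:9092 and PLAINTEXT_HOST://localhost:29092 endpoints. As an illustration only (not part of this CSIT suite), the same registration could be read back with the ZooKeeper Java client, assuming the zookeeper:2181 address from the config dump:

    // Sketch: dump the broker registration JSON the log above refers to.
    import org.apache.zookeeper.ZooKeeper;

    public class BrokerZnodeCheck {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("zookeeper:2181", 18000, event -> { });
            try {
                byte[] data = zk.getData("/brokers/ids/1", false, null);
                System.out.println(new String(data)); // registration JSON
            } finally {
                zk.close();
            }
        }
    }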
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.249305401Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.263517877Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=14.209036ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.274571044Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.2755304Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=959.166µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.280668978Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.282215501Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.546533ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.393044202Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.393630911Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=581.239µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.400757379Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.401606702Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=858.684µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.406191471Z level=info msg="Executing migration" id="create alert_notification table v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.406955184Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=764.653µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.411851287Z level=info msg="Executing migration" id="Add column is_default"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.415370141Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.518454ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.422806043Z level=info msg="Executing migration" id="Add column frequency"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.427136709Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.327696ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.43046117Z level=info msg="Executing migration" id="Add column send_reminder"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.433878921Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.420772ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.437347775Z level=info msg="Executing migration" id="Add column disable_resolve_message"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.440757826Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.409661ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.463837216Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.465299838Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.462272ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.469000865Z level=info msg="Executing migration" id="Update alert table charset"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.469082786Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=82.921µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.47460561Z level=info msg="Executing migration" id="Update alert_notification table charset"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.47462641Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=21.521µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.500883048Z level=info msg="Executing migration" id="create notification_journal table v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.502065866Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.182648ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.509930495Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.511778823Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.846978ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.518321382Z level=info msg="Executing migration" id="drop alert_notification_journal"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.519123935Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=799.803µs
23:16:53 kafka | [2024-02-25 23:14:26,116] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,122] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,125] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:53 kafka | [2024-02-25 23:14:26,128] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
23:16:53 kafka | [2024-02-25 23:14:26,130] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
23:16:53 kafka | [2024-02-25 23:14:26,130] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
23:16:53 kafka | [2024-02-25 23:14:26,170] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
23:16:53 kafka | [2024-02-25 23:14:26,170] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,172] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
23:16:53 kafka | [2024-02-25 23:14:26,183] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,186] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,189] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,204] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,205] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
23:16:53 kafka | [2024-02-25 23:14:26,209] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,215] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
23:16:53 kafka | [2024-02-25 23:14:26,221] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.525668294Z level=info msg="Executing migration" id="create alert_notification_state table v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.526810541Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.141687ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.533136598Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.53463254Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.491492ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.53859542Z level=info msg="Executing migration" id="Add for to alert table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.544556301Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=5.961401ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.549834961Z level=info msg="Executing migration" id="Add column uid in alert_notification"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.553457775Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.621934ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.559883773Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.560089716Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=206.593µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.563771622Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.565238494Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.467182ms
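The grafana migrator entries throughout this log follow a fixed pattern: an "Executing migration" line, then a "Migration successfully executed" line carrying the measured duration. Grafana's migrator is written in Go; the Java sketch below, with a hypothetical Migration interface, only mirrors that logging pattern to make the paired lines easier to read:

    // Schematic only: not grafana code, just the execute-then-time loop
    // implied by the paired log lines above.
    import java.util.List;

    interface Migration {
        String id();
        void execute() throws Exception;
    }

    class MigratorSketch {
        static void run(List<Migration> migrations) throws Exception {
            for (Migration m : migrations) {
                System.out.printf("level=info msg=\"Executing migration\" id=\"%s\"%n", m.id());
                long start = System.nanoTime();
                m.execute();
                double micros = (System.nanoTime() - start) / 1_000.0;
                System.out.printf("level=info msg=\"Migration successfully executed\" id=\"%s\" duration=%.3fµs%n",
                        m.id(), micros);
            }
        }
    }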
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.570856699Z level=info msg="Executing migration" id="Remove unique index org_id_name"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.571722202Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=866.243µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.578614677Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.582501546Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.881929ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.586163002Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.586254003Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=94.751µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.589403651Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.590311675Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=907.784µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.596145633Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.597108878Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=963.025µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.600977456Z level=info msg="Executing migration" id="Drop old annotation table v4"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.601082668Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=105.182µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.604269636Z level=info msg="Executing migration" id="create annotation table v5"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.605053448Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=783.342µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.611840801Z level=info msg="Executing migration" id="add index annotation 0 v3"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.613282823Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.441542ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.616864998Z level=info msg="Executing migration" id="add index annotation 1 v3"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.617850972Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=946.084µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.62101382Z level=info msg="Executing migration" id="add index annotation 2 v3"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.621951834Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=938.624µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.629085933Z level=info msg="Executing migration" id="add index annotation 3 v3"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.630351891Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.265958ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.635042393Z level=info msg="Executing migration" id="add index annotation 4 v3"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.636034838Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=992.645µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.639759615Z level=info msg="Executing migration" id="Update annotation table charset"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.639786975Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.74µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.645481072Z level=info msg="Executing migration" id="Add column region_id to annotation table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.649411731Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.930659ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.652704771Z level=info msg="Executing migration" id="Drop category_id index"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.653624895Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=919.494µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.656747643Z level=info msg="Executing migration" id="Add column tags to annotation table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.660682812Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.931469ms
23:16:53 policy-api | [2024-02-25T23:14:36.405+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
23:16:53 policy-api | [2024-02-25T23:14:37.613+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
23:16:53 policy-api | [2024-02-25T23:14:37.893+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@607c7f58, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4bbb00a4, org.springframework.security.web.context.SecurityContextHolderFilter@6e11d059, org.springframework.security.web.header.HeaderWriterFilter@1d123972, org.springframework.security.web.authentication.logout.LogoutFilter@54e1e8a7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@206d4413, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@19bd1f98, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@69cf9acb, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@543d242e, org.springframework.security.web.access.ExceptionTranslationFilter@5b3063b7, org.springframework.security.web.access.intercept.AuthorizationFilter@407bfc49]
23:16:53 policy-api | [2024-02-25T23:14:38.918+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
23:16:53 policy-api | [2024-02-25T23:14:39.026+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
23:16:53 policy-api | [2024-02-25T23:14:39.055+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
23:16:53 policy-api | [2024-02-25T23:14:39.073+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.044 seconds (process running for 12.672)
23:16:53 policy-api | [2024-02-25T23:14:39.920+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
23:16:53 policy-api | [2024-02-25T23:14:39.920+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
23:16:53 policy-api | [2024-02-25T23:14:39.923+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
23:16:53 policy-api | [2024-02-25T23:15:00.221+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers:
23:16:53 policy-api | []
23:16:53 kafka | [2024-02-25 23:14:26,225] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
23:16:53 kafka | [2024-02-25 23:14:26,231] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
23:16:53 kafka | [2024-02-25 23:14:26,231] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
23:16:53 kafka | [2024-02-25 23:14:26,235] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,236] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,237] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,237] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,241] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
23:16:53 kafka | [2024-02-25 23:14:26,241] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser)
23:16:53 kafka | [2024-02-25 23:14:26,241] INFO Kafka startTimeMs: 1708902866235 (org.apache.kafka.common.utils.AppInfoParser)
23:16:53 kafka | [2024-02-25 23:14:26,241] INFO [Controller id=1] List of topics to be deleted:  (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,241] INFO [Controller id=1] List of topics ineligible for deletion:  (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,242] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,242] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
23:16:53 kafka | [2024-02-25 23:14:26,242] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
23:16:53 kafka | [2024-02-25 23:14:26,244] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,247] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:26,260] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
23:16:53 kafka | [2024-02-25 23:14:26,260] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:53 kafka | [2024-02-25 23:14:26,263] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
23:16:53 kafka | [2024-02-25 23:14:26,264] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
23:16:53 kafka | [2024-02-25 23:14:26,264] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
23:16:53 kafka | [2024-02-25 23:14:26,265] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
23:16:53 kafka | [2024-02-25 23:14:26,268] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
23:16:53 kafka | [2024-02-25 23:14:26,268] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,282] INFO [Controller id=1] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,285] INFO [Controller id=1] Partitions that completed preferred replica election:  (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,285] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,286] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
23:16:53 kafka | [2024-02-25 23:14:26,286] INFO [Controller id=1] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,287] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions  triggered by ZkTriggered (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,312] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:26,355] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:16:53 kafka | [2024-02-25 23:14:26,363] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:26,407] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
23:16:53 kafka | [2024-02-25 23:14:31,313] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:31,314] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:52,372] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
23:16:53 kafka | [2024-02-25 23:14:52,373] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
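The last two broker entries above record server-side topic creation through kafka.zk.AdminZkClient. For comparison only, a client-side request that would create the same policy-pdp-pap topic with the Java AdminClient looks roughly like this; localhost:29092 is the PLAINTEXT_HOST listener registered earlier in the log:

    // Illustrative sketch, not part of the CSIT suite.
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
            try (AdminClient admin = AdminClient.create(props)) {
                // One partition, replication factor 1, matching the
                // HashMap(0 -> ArrayBuffer(1)) assignment in the log.
                admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1)))
                     .all().get();
            }
        }
    }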
23:16:53 kafka | [2024-02-25 23:14:52,385] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:52,392] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.667184671Z level=info msg="Executing migration" id="Create annotation_tag table v2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.667828081Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=643.21µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.671638538Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.672567092Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=927.644µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.676848807Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.677758341Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=909.384µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.683908955Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.70011092Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=16.202105ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.706430216Z level=info msg="Executing migration" id="Create annotation_tag table v3"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.706931154Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=500.848µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.710118832Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.711704276Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.584444ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.719327742Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.719663497Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=331.144µs
23:16:53 policy-apex-pdp | Waiting for mariadb port 3306...
23:16:53 policy-apex-pdp | mariadb (172.17.0.2:3306) open
23:16:53 policy-apex-pdp | Waiting for kafka port 9092...
23:16:53 policy-apex-pdp | kafka (172.17.0.9:9092) open
23:16:53 policy-apex-pdp | Waiting for pap port 6969...
23:16:53 policy-apex-pdp | pap (172.17.0.10:6969) open
23:16:53 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.219+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.451+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:53 policy-apex-pdp | allow.auto.create.topics = true
23:16:53 policy-apex-pdp | auto.commit.interval.ms = 5000
23:16:53 policy-apex-pdp | auto.include.jmx.reporter = true
23:16:53 policy-apex-pdp | auto.offset.reset = latest
23:16:53 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:16:53 policy-apex-pdp | check.crcs = true
23:16:53 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
23:16:53 policy-apex-pdp | client.id = consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-1
23:16:53 policy-apex-pdp | client.rack =
23:16:53 policy-apex-pdp | connections.max.idle.ms = 540000
23:16:53 policy-apex-pdp | default.api.timeout.ms = 60000
23:16:53 policy-apex-pdp | enable.auto.commit = true
23:16:53 policy-apex-pdp | exclude.internal.topics = true
23:16:53 policy-apex-pdp | fetch.max.bytes = 52428800
23:16:53 policy-apex-pdp | fetch.max.wait.ms = 500
23:16:53 policy-apex-pdp | fetch.min.bytes = 1
23:16:53 policy-apex-pdp | group.id = b53cde7a-481f-427a-882b-d5bcee52ac2a
23:16:53 policy-apex-pdp | group.instance.id = null
23:16:53 policy-apex-pdp | heartbeat.interval.ms = 3000
23:16:53 policy-apex-pdp | interceptor.classes = []
23:16:53 policy-apex-pdp | internal.leave.group.on.close = true
23:16:53 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:53 policy-apex-pdp | isolation.level = read_uncommitted
23:16:53 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:53 policy-apex-pdp | max.partition.fetch.bytes = 1048576
23:16:53 policy-apex-pdp | max.poll.interval.ms = 300000
23:16:53 policy-apex-pdp | max.poll.records = 500
23:16:53 policy-apex-pdp | metadata.max.age.ms = 300000
23:16:53 policy-apex-pdp | metric.reporters = []
23:16:53 policy-apex-pdp | metrics.num.samples = 2
23:16:53 policy-apex-pdp | metrics.recording.level = INFO
23:16:53 policy-apex-pdp | metrics.sample.window.ms = 30000
23:16:53 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:53 policy-apex-pdp | receive.buffer.bytes = 65536
23:16:53 policy-apex-pdp | reconnect.backoff.max.ms = 1000
23:16:53 policy-apex-pdp | reconnect.backoff.ms = 50
23:16:53 policy-apex-pdp | request.timeout.ms = 30000
23:16:53 policy-apex-pdp | retry.backoff.ms = 100
23:16:53 policy-apex-pdp | sasl.client.callback.handler.class = null
23:16:53 policy-apex-pdp | sasl.jaas.config = null
23:16:53 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:53 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
23:16:53 policy-apex-pdp | sasl.kerberos.service.name = null
23:16:53 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:53 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:53 policy-apex-pdp | sasl.login.callback.handler.class = null
23:16:53 policy-apex-pdp | sasl.login.class = null
23:16:53 policy-apex-pdp | sasl.login.connect.timeout.ms = null
23:16:53 policy-apex-pdp | sasl.login.read.timeout.ms = null
23:16:53 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
23:16:53 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
23:16:53 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
23:16:53 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
23:16:53 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
23:16:53 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
23:16:53 policy-apex-pdp | sasl.mechanism = GSSAPI
23:16:53 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
23:16:53 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
23:16:53 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
23:16:53 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
23:16:53 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
23:16:53 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
23:16:53 policy-apex-pdp | security.protocol = PLAINTEXT
23:16:53 policy-apex-pdp | security.providers = null
23:16:53 policy-apex-pdp | send.buffer.bytes = 131072
23:16:53 policy-apex-pdp | session.timeout.ms = 45000
23:16:53 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
23:16:53 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
23:16:53 policy-apex-pdp | ssl.cipher.suites = null
23:16:53 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:53 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
23:16:53 policy-apex-pdp | ssl.engine.factory.class = null
23:16:53 policy-apex-pdp | ssl.key.password = null
23:16:53 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:53 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:16:53 policy-apex-pdp | ssl.keystore.key = null
23:16:53 policy-apex-pdp | ssl.keystore.location = null
23:16:53 policy-apex-pdp | ssl.keystore.password = null
23:16:53 policy-apex-pdp | ssl.keystore.type = JKS
23:16:53 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:53 policy-apex-pdp | ssl.provider = null
23:16:53 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:53 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:53 policy-apex-pdp | ssl.truststore.certificates = null
23:16:53 policy-apex-pdp | ssl.truststore.location = null
23:16:53 policy-apex-pdp | ssl.truststore.password = null
23:16:53 policy-apex-pdp | ssl.truststore.type = JKS
23:16:53 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:53 policy-apex-pdp |
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.620+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.620+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.620+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902893618
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.623+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-1, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Subscribed to topic(s): policy-pdp-pap
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.636+00:00|INFO|ServiceManager|main] service manager starting
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.636+00:00|INFO|ServiceManager|main] service manager starting topics
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.640+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b53cde7a-481f-427a-882b-d5bcee52ac2a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.661+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:53 policy-apex-pdp | allow.auto.create.topics = true
23:16:53 policy-apex-pdp | auto.commit.interval.ms = 5000
23:16:53 policy-apex-pdp | auto.include.jmx.reporter = true
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.723707748Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.724217636Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=509.638µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.730603233Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.730903337Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=299.734µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.738753226Z level=info msg="Executing migration" id="Add created time to annotation table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.744454003Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.705957ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.790002333Z level=info msg="Executing migration" id="Add updated time to annotation table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.794906308Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.904835ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.800084517Z level=info msg="Executing migration" id="Add index for created in annotation table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.801210133Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.125026ms
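The ConsumerConfig dump above is apex-pdp constructing its KafkaConsumer for the policy-pdp-pap topic: building the consumer is what prints the config block, and subscribing produces the "Subscribed to topic(s)" line. A minimal Java sketch under those assumptions follows; the group id below is a placeholder, since the real one is the generated UUID visible in the log:

    // Illustrative sketch of the consumer whose config the log prints.
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group"); // placeholder
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Construction logs the "ConsumerConfig values:" block at INFO.
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap")); // "Subscribed to topic(s): policy-pdp-pap"
            }
        }
    }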
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.807617261Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=918.364µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.811756074Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.811986827Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=230.973µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.818469036Z level=info msg="Executing migration" id="Add epoch_end column"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.824830322Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.360576ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.833626185Z level=info msg="Executing migration" id="Add index for epoch_end"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.834349006Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=722.401µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.840836075Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.84112317Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=287.345µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.848409329Z level=info msg="Executing migration" id="Move region to single row"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.848868516Z level=info msg="Migration successfully executed" id="Move region to single row" duration=459.287µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.852966839Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.854322789Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.35595ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.859726301Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.860616845Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=888.564µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.865552989Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.866512255Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=958.926µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.871584722Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.872520646Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=931.844µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.876598877Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.878189592Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.589785ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.885421491Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.887036116Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.614035ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.901030728Z level=info msg="Executing migration" id="Increase tags column to length 4096"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.901216031Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=213.273µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.909147041Z level=info msg="Executing migration" id="create test_data table"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.910325119Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.180368ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.916549664Z level=info msg="Executing migration" id="create dashboard_version table v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.917840583Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.290409ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.928764129Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.931385489Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=2.6265ms
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.936928623Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.937914977Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=986.274µs
23:16:53 policy-db-migrator | Waiting for mariadb port 3306...
23:16:53 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:53 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:53 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:53 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:53 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:53 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
23:16:53 policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded!
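
The db-migrator gates on MariaDB reachability by retrying a TCP connect until it succeeds, which is what the repeated nc failures followed by "succeeded!" show. A comparable sketch in Java; the class name, connect timeout, and retry interval are assumptions for illustration, not values taken from this job:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class WaitForPort {
        // Retry until the TCP port accepts a connection, as the migrator's nc loop does.
        public static void waitFor(String host, int port) throws InterruptedException {
            while (true) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, port), 2_000);
                    return; // connection succeeded
                } catch (IOException retry) {
                    Thread.sleep(1_000); // connection refused or timed out; try again
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            waitFor("mariadb", 3306);
            System.out.println("Connection to mariadb 3306 succeeded!");
        }
    }
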
23:16:53 policy-db-migrator | 321 blocks
23:16:53 policy-db-migrator | Preparing upgrade release version: 0800
23:16:53 policy-db-migrator | Preparing upgrade release version: 0900
23:16:53 policy-db-migrator | Preparing upgrade release version: 1000
23:16:53 policy-db-migrator | Preparing upgrade release version: 1100
23:16:53 policy-db-migrator | Preparing upgrade release version: 1200
23:16:53 policy-db-migrator | Preparing upgrade release version: 1300
23:16:53 policy-db-migrator | Done
23:16:53 policy-db-migrator | name version
23:16:53 policy-db-migrator | policyadmin 0
23:16:53 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
23:16:53 policy-db-migrator | upgrade: 0 -> 1300
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.942424866Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.94266288Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=239.854µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.948935305Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.949602145Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=667.01µs
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.957323162Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
23:16:53 policy-apex-pdp | auto.offset.reset = latest
23:16:53 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:16:53 policy-apex-pdp | check.crcs = true
23:16:53 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
23:16:53 policy-apex-pdp | client.id = consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2
23:16:53 policy-apex-pdp | client.rack = 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | connections.max.idle.ms = 540000
23:16:53 policy-apex-pdp | default.api.timeout.ms = 60000
23:16:53 policy-apex-pdp | enable.auto.commit = true
23:16:53 policy-apex-pdp | exclude.internal.topics = true
23:16:53 policy-apex-pdp | fetch.max.bytes = 52428800
23:16:53 policy-apex-pdp | fetch.max.wait.ms = 500
23:16:53 policy-apex-pdp | fetch.min.bytes = 1
23:16:53 policy-apex-pdp | group.id = b53cde7a-481f-427a-882b-d5bcee52ac2a
23:16:53 policy-apex-pdp | group.instance.id = null
23:16:53 policy-apex-pdp | heartbeat.interval.ms = 3000
23:16:53 policy-apex-pdp | interceptor.classes = []
23:16:53 policy-apex-pdp | internal.leave.group.on.close = true
23:16:53 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:53 policy-apex-pdp | isolation.level = read_uncommitted
23:16:53 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:53 policy-apex-pdp | max.partition.fetch.bytes = 1048576
23:16:53 policy-apex-pdp | max.poll.interval.ms = 300000
23:16:53 policy-apex-pdp | max.poll.records = 500
23:16:53 policy-apex-pdp | metadata.max.age.ms = 300000
23:16:53 policy-apex-pdp | metric.reporters = []
23:16:53 policy-apex-pdp | metrics.num.samples = 2
23:16:53 policy-apex-pdp | metrics.recording.level = INFO
23:16:53 policy-apex-pdp | metrics.sample.window.ms = 30000
23:16:53 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:53 policy-apex-pdp | receive.buffer.bytes = 65536
23:16:53 policy-apex-pdp | reconnect.backoff.max.ms = 1000
23:16:53 policy-apex-pdp | reconnect.backoff.ms = 50
23:16:53 policy-apex-pdp | request.timeout.ms = 30000
23:16:53 policy-apex-pdp | retry.backoff.ms = 100
23:16:53 policy-apex-pdp | sasl.client.callback.handler.class = null
23:16:53 policy-apex-pdp | sasl.jaas.config = null
23:16:53 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:53 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
23:16:53 policy-apex-pdp | sasl.kerberos.service.name = null
23:16:53 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:53 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:53 policy-apex-pdp | sasl.login.callback.handler.class = null
23:16:53 policy-apex-pdp | sasl.login.class = null
23:16:53 policy-apex-pdp | sasl.login.connect.timeout.ms = null
23:16:53 policy-apex-pdp | sasl.login.read.timeout.ms = null
23:16:53 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
23:16:53 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
23:16:53 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
23:16:53 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
23:16:53 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
23:16:53 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
23:16:53 policy-apex-pdp | sasl.mechanism = GSSAPI
23:16:53 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
23:16:53 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
23:16:53 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
23:16:53 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
23:16:53 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
23:16:53 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
23:16:53 policy-apex-pdp | security.protocol = PLAINTEXT
23:16:53 policy-apex-pdp | security.providers = null
23:16:53 policy-apex-pdp | send.buffer.bytes = 131072
23:16:53 policy-apex-pdp | session.timeout.ms = 45000
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
23:16:53 kafka | [2024-02-25 23:14:52,420] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(9kyEG5R7S_ymSJoFuQGdeg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(19qiw_gSQSuGAZ9hqdP69g),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:52,423] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
23:16:53 kafka | [2024-02-25 23:14:52,428] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.958020722Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=692.78µs
23:16:53 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
23:16:53 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.969311523Z level=info msg="Executing migration" id="create team table"
23:16:53 policy-pap | Waiting for mariadb port 3306...
23:16:53 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
23:16:53 kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:564 level=info msg="No time or size retention was set so using the default time retention" duration=15d
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.970051995Z level=info msg="Migration successfully executed" id="create team table" duration=744.652µs
23:16:53 policy-pap | mariadb (172.17.0.2:3306) open
23:16:53 policy-apex-pdp | ssl.cipher.suites = null
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.979246785Z level=info msg="Executing migration" id="add index team.org_id"
23:16:53 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
23:16:53 policy-pap | Waiting for kafka port 9092...
23:16:53 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:53 kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:608 level=info msg="Starting Prometheus Server" mode=server version="(version=2.50.0, branch=HEAD, revision=814b920e8a6345d35712b5857ebd4cb5e90fc107)"
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.98030014Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.052715ms
23:16:53 simulator | overriding logback.xml
23:16:53 policy-pap | kafka (172.17.0.9:9092) open
23:16:53 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
23:16:53 kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:613 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@384077e1cf50, date=20240222-09:38:19, tags=netgo,builtinassets,stringlabels)"
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.986751539Z level=info msg="Executing migration" id="add unique index team_org_id_name"
23:16:53 simulator | 2024-02-25 23:14:18,164 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
23:16:53 policy-pap | Waiting for api port 6969...
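
Each numbered upgrade step logged by policy-db-migrator wraps an idempotent DDL statement (CREATE TABLE IF NOT EXISTS ...), which is why re-running a partially applied upgrade is safe. A sketch of executing one such step over JDBC, reusing the 0400 statement from the log; the JDBC URL, database name, and credentials are placeholders rather than values from this job, and a MariaDB driver on the classpath is assumed:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigrationStepSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials; the real values come from the migrator's environment.
            try (Connection c = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_password");
                 Statement st = c.createStatement()) {
                // CREATE TABLE IF NOT EXISTS makes each numbered step safe to re-run.
                st.execute("CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences "
                        + "(name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)");
            }
        }
    }
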
23:16:53 policy-apex-pdp | ssl.engine.factory.class = null
23:16:53 kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:614 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.987702523Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=950.424µs
23:16:53 simulator | 2024-02-25 23:14:18,248 INFO org.onap.policy.models.simulators starting
23:16:53 policy-pap | api (172.17.0.8:6969) open
23:16:53 policy-apex-pdp | ssl.key.password = null
23:16:53 kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:615 level=info fd_limits="(soft=1048576, hard=1048576)"
23:16:53 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:17.993399739Z level=info msg="Executing migration" id="Add column uid in team"
23:16:53 simulator | 2024-02-25 23:14:18,248 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
23:16:53 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
23:16:53 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:53 kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:616 level=info vm_limits="(soft=unlimited, hard=unlimited)"
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.001561603Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=8.162324ms
23:16:53 simulator | 2024-02-25 23:14:18,459 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
23:16:53 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
23:16:53 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:16:53 kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.550Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.006030771Z level=info msg="Executing migration" id="Update uid column values in team"
23:16:53 simulator | 2024-02-25 23:14:18,460 INFO org.onap.policy.models.simulators starting A&AI simulator
23:16:53 policy-pap | 
23:16:53 policy-apex-pdp | ssl.keystore.key = null
23:16:53 prometheus | ts=2024-02-25T23:14:11.550Z caller=main.go:1118 level=info msg="Starting TSDB ..."
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.006217163Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=186.082µs
23:16:53 simulator | 2024-02-25 23:14:18,584 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:53 policy-pap | . ____ _ __ _ _
23:16:53 kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 policy-apex-pdp | ssl.keystore.location = null
23:16:53 prometheus | ts=2024-02-25T23:14:11.557Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.013073285Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
23:16:53 simulator | 2024-02-25 23:14:18,606 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:53 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
23:16:53 kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.557Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.014392264Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.318589ms
23:16:53 simulator | 2024-02-25 23:14:18,610 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:53 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
23:16:53 policy-apex-pdp | ssl.keystore.password = null
23:16:53 kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.559Z caller=head.go:610 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
23:16:53 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.019410479Z level=info msg="Executing migration" id="create team member table"
23:16:53 simulator | 2024-02-25 23:14:18,618 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:53 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
23:16:53 policy-apex-pdp | ssl.keystore.type = JKS
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.559Z caller=head.go:692 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.32µs
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.02010861Z level=info msg="Migration successfully executed" id="create team member table" duration=698.481µs
23:16:53 simulator | 2024-02-25 23:14:18,681 INFO Session workerName=node0
23:16:53 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
23:16:53 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.559Z caller=head.go:700 level=info component=tsdb msg="Replaying WAL, this may take a while"
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.028252199Z level=info msg="Executing migration" id="add index team_member.org_id"
23:16:53 simulator | 2024-02-25 23:14:19,289 INFO Using GSON for REST calls
23:16:53 policy-pap | =========|_|==============|___/=/_/_/_/
23:16:53 policy-apex-pdp | ssl.provider = null
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.560Z caller=head.go:771 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.029797113Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.543954ms
23:16:53 simulator | 2024-02-25 23:14:19,464 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}
23:16:53 policy-pap | :: Spring Boot :: (v3.1.8)
23:16:53 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.560Z caller=head.go:808 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=174.604µs wal_replay_duration=448.569µs wbl_replay_duration=360ns total_replay_duration=653.233µs
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.037340185Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
23:16:53 simulator | 2024-02-25 23:14:19,478 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
23:16:53 policy-pap | 
23:16:53 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.562Z caller=main.go:1139 level=info fs_type=EXT4_SUPER_MAGIC
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.039466176Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=2.12526ms
23:16:53 simulator | 2024-02-25 23:14:19,486 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1852ms
23:16:53 policy-pap | [2024-02-25T23:14:41.472+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 31 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
23:16:53 policy-apex-pdp | ssl.truststore.certificates = null
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.562Z caller=main.go:1142 level=info msg="TSDB started"
23:16:53 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.045474875Z level=info msg="Executing migration" id="add index team_member.team_id"
23:16:53 simulator | 2024-02-25 23:14:19,486 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4124 ms.
23:16:53 policy-pap | [2024-02-25T23:14:41.474+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
23:16:53 policy-apex-pdp | ssl.truststore.location = null
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.562Z caller=main.go:1324 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.046538161Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.063105ms
23:16:53 simulator | 2024-02-25 23:14:19,495 INFO org.onap.policy.models.simulators starting SDNC simulator
23:16:53 policy-pap | [2024-02-25T23:14:43.517+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
23:16:53 policy-apex-pdp | ssl.truststore.password = null
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.564Z caller=main.go:1361 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.204703ms db_storage=2.46µs remote_storage=2.52µs web_handler=790ns query_engine=2.29µs scrape=298.586µs scrape_sd=144.852µs notify=40.741µs notify_sd=12.94µs rules=3.2µs tracing=7.79µs
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.056265744Z level=info msg="Executing migration" id="Add column email to team table"
23:16:53 simulator | 2024-02-25 23:14:19,499 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:53 policy-pap | [2024-02-25T23:14:43.621+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 93 ms. Found 7 JPA repository interfaces.
23:16:53 policy-apex-pdp | ssl.truststore.type = JKS
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.564Z caller=main.go:1103 level=info msg="Server is ready to receive web requests."
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.063893787Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.628283ms
23:16:53 simulator | 2024-02-25 23:14:19,500 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:53 policy-pap | [2024-02-25T23:14:44.042+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
23:16:53 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 prometheus | ts=2024-02-25T23:14:11.564Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
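
The simulator's JettyJerseyServer records describe embedded Jetty servers (the A&AI simulator on port 6666, the SDNC simulator on 6668) with a Jersey ServletContainer mapped at /*. A minimal embedded-Jetty sketch along those lines, assuming Jetty 11 and Jersey 3 on the classpath; the class name is illustrative and this is not the ONAP simulator code itself:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.glassfish.jersey.servlet.ServletContainer;

    public class SimulatorServerSketch {
        public static void main(String[] args) throws Exception {
            // Port 6668 is the SDNC simulator endpoint from the log; Jetty binds 0.0.0.0 by default.
            Server server = new Server(6668);
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            // The log shows a Jersey ServletContainer registered at /* inside the context.
            context.addServlet(ServletContainer.class, "/*");
            server.setHandler(context);
            server.start();
            server.join();
        }
    }
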
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.070070308Z level=info msg="Executing migration" id="Add column external to team_member table"
23:16:53 simulator | 2024-02-25 23:14:19,503 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:53 policy-pap | [2024-02-25T23:14:44.042+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
23:16:53 policy-apex-pdp | 
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.075085273Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.013205ms
23:16:53 simulator | 2024-02-25 23:14:19,504 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:53 policy-pap | [2024-02-25T23:14:44.796+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.669+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.081618539Z level=info msg="Executing migration" id="Add column permission to team_member table"
23:16:53 simulator | 2024-02-25 23:14:19,517 INFO Session workerName=node0
23:16:53 policy-pap | [2024-02-25T23:14:44.807+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.669+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.08840046Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=6.779421ms
23:16:53 simulator | 2024-02-25 23:14:19,593 INFO Using GSON for REST calls
23:16:53 policy-pap | [2024-02-25T23:14:44.809+00:00|INFO|StandardService|main] Starting service [Tomcat]
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.669+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902893669
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.095442584Z level=info msg="Executing migration" id="create dashboard acl table"
23:16:53 simulator | 2024-02-25 23:14:19,608 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}
23:16:53 policy-pap | [2024-02-25T23:14:44.810+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.669+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Subscribed to topic(s): policy-pdp-pap
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.096230677Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=787.253µs
23:16:53 simulator | 2024-02-25 23:14:19,610 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
23:16:53 policy-pap | [2024-02-25T23:14:44.933+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.103510794Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
23:16:53 simulator | 2024-02-25 23:14:19,610 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1976ms
23:16:53 policy-pap | [2024-02-25T23:14:44.934+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3368 ms
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.670+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=39c8ecad-0633-4ba4-9ca4-00222bde67e2, alive=false, publisher=null]]: starting
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.10592993Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=2.414946ms
23:16:53 policy-pap | [2024-02-25T23:14:45.401+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
23:16:53 simulator | 2024-02-25 23:14:19,613 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4890 ms.
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.683+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.112628959Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
23:16:53 policy-pap | [2024-02-25T23:14:45.496+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
23:16:53 simulator | 2024-02-25 23:14:19,614 INFO org.onap.policy.models.simulators starting SO simulator
23:16:53 policy-db-migrator | > upgrade 0450-pdpgroup.sql
23:16:53 policy-apex-pdp | acks = -1
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.113844237Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.214808ms
23:16:53 policy-pap | [2024-02-25T23:14:45.500+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
23:16:53 simulator | 2024-02-25 23:14:19,623 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | auto.include.jmx.reporter = true
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.121111755Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
23:16:53 policy-pap | [2024-02-25T23:14:45.550+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
23:16:53 simulator | 2024-02-25 23:14:19,623 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
23:16:53 policy-apex-pdp | batch.size = 16384
23:16:53 kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.122439594Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.327199ms
23:16:53 policy-pap | [2024-02-25T23:14:45.942+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
23:16:53 simulator | 2024-02-25 23:14:19,628 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | bootstrap.servers = [kafka:9092]
23:16:53 kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.129759003Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
23:16:53 policy-pap | [2024-02-25T23:14:45.966+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
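The policy-db-migrator steps above are deliberately idempotent DDL (CREATE TABLE IF NOT EXISTS ...), and the policy-pap HikariPool-1 messages show the connection pool coming up over the MariaDB driver. A minimal sketch of replaying one such statement through HikariCP; the JDBC URL and credentials are placeholders, not values taken from this log:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;
    import java.sql.Statement;

    public class MigratorSketch {
        public static void main(String[] args) throws Exception {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // placeholder URL
            cfg.setUsername("policy_user");                            // placeholder credentials
            cfg.setPassword("policy_user");
            try (HikariDataSource ds = new HikariDataSource(cfg);      // emits "HikariPool-1 - Starting..."
                 Connection conn = ds.getConnection();
                 Statement stmt = conn.createStatement()) {
                // Same shape as the 0450-pdpgroup.sql step above; safe to re-run.
                stmt.execute("CREATE TABLE IF NOT EXISTS pdpgroup ("
                        + "`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, "
                        + "name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, "
                        + "PRIMARY KEY PK_PDPGROUP (name, version))");
            }
        }
    }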
23:16:53 simulator | 2024-02-25 23:14:19,628 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | buffer.memory = 33554432
23:16:53 kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.130774547Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.014804ms
23:16:53 policy-pap | [2024-02-25T23:14:46.086+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@124ac145
23:16:53 simulator | 2024-02-25 23:14:19,636 INFO Session workerName=node0
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
23:16:53 kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.137399785Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
23:16:53 policy-pap | [2024-02-25T23:14:46.089+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
23:16:53 simulator | 2024-02-25 23:14:19,706 INFO Using GSON for REST calls
23:16:53 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
23:16:53 policy-apex-pdp | client.id = producer-1
23:16:53 kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.138652904Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.251939ms
23:16:53 policy-pap | [2024-02-25T23:14:48.229+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
23:16:53 simulator | 2024-02-25 23:14:19,723 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | compression.type = none
23:16:53 kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.144157976Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
23:16:53 policy-pap | [2024-02-25T23:14:48.245+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
23:16:53 simulator | 2024-02-25 23:14:19,728 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:53 policy-apex-pdp | connections.max.idle.ms = 540000
23:16:53 kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.146713634Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=2.556778ms
23:16:53 policy-pap | [2024-02-25T23:14:48.788+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
23:16:53 simulator | 2024-02-25 23:14:19,729 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @2094ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | delivery.timeout.ms = 120000
23:16:53 kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.153715777Z level=info msg="Executing migration" id="add index dashboard_permission"
23:16:53 policy-pap | [2024-02-25T23:14:49.240+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
23:16:53 simulator | 2024-02-25 23:14:19,729 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4899 ms.
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | enable.idempotence = true
23:16:53 kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.155381722Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.663875ms
23:16:53 policy-pap | [2024-02-25T23:14:49.361+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
23:16:53 simulator | 2024-02-25 23:14:19,732 INFO org.onap.policy.models.simulators starting VFC simulator
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | interceptor.classes = []
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.196652393Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
23:16:53 kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:53 policy-pap | [2024-02-25T23:14:49.699+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:53 simulator | 2024-02-25 23:14:19,736 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
23:16:53 policy-db-migrator | > upgrade 0470-pdp.sql
23:16:53 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.197352933Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=705.66µs
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | allow.auto.create.topics = true
23:16:53 simulator | 2024-02-25 23:14:19,737 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | linger.ms = 0
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.204451658Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | auto.commit.interval.ms = 5000
23:16:53 simulator | 2024-02-25 23:14:19,739 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:53 policy-apex-pdp | max.block.ms = 60000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.204713431Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=262.263µs
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | auto.include.jmx.reporter = true
23:16:53 simulator | 2024-02-25 23:14:19,740 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | max.in.flight.requests.per.connection = 5
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.209360081Z level=info msg="Executing migration" id="create tag table"
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | auto.offset.reset = latest
23:16:53 simulator | 2024-02-25 23:14:19,744 INFO Session workerName=node0
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | max.request.size = 1048576
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.210154882Z level=info msg="Migration successfully executed" id="create tag table" duration=794.611µs
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | bootstrap.servers = [kafka:9092]
23:16:53 simulator | 2024-02-25 23:14:19,800 INFO Using GSON for REST calls
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | metadata.max.age.ms = 300000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.21881994Z level=info msg="Executing migration" id="add index tag.key_value"
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | check.crcs = true
23:16:53 simulator | 2024-02-25 23:14:19,810 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}
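The simulator lines trace the usual embedded-Jetty lifecycle that the JettyJerseyServer wrapper reports: the server is built STOPPED with a Jersey ServletContainer attached, goes STARTING, and then the context handler, the connector, and the Server itself each log Started. A minimal sketch of that wiring under Jetty 11 and Jersey; the provider package below is a hypothetical stand-in for the simulators' own REST resources:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;
    import org.glassfish.jersey.servlet.ServletContainer;

    public class SimulatorServerSketch {
        public static void main(String[] args) throws Exception {
            Server server = new Server(6670); // VFC simulator port from the log
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            ServletHolder jersey = new ServletHolder(ServletContainer.class);
            // Hypothetical package; the real simulators register their own resources.
            jersey.setInitParameter("jersey.config.server.provider.packages", "org.example.sim");
            context.addServlet(jersey, "/*");
            server.setHandler(context);
            server.start(); // produces the "Started o.e.j.s.ServletContextHandler"/"Started Server" lines
            server.join();
        }
    }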
23:16:53 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
23:16:53 policy-apex-pdp | metadata.max.idle.ms = 300000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.219873417Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.055107ms
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:53 simulator | 2024-02-25 23:14:19,813 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | metric.reporters = []
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.226840669Z level=info msg="Executing migration" id="create login attempt table"
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | client.id = consumer-bd340acf-32e5-46ed-9341-bc882164db21-1
23:16:53 simulator | 2024-02-25 23:14:19,813 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @2179ms
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
23:16:53 policy-apex-pdp | metrics.num.samples = 2
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.228130409Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.28859ms
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | client.rack = 
23:16:53 simulator | 2024-02-25 23:14:19,813 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4926 ms.
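The policy-apex-pdp ProducerConfig dump that runs through these lines (acks = -1, enable.idempotence = true, retries = 2147483647, linger.ms = 0, bootstrap.servers = [kafka:9092]) is the stock idempotent-producer profile. A minimal sketch reproducing those settings with the Kafka Java client; the value serializer is an assumption, since only key.serializer appears in this part of the dump, and the topic name comes from the consumer subscription logged earlier:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");                // logged as acks = -1
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); // retries = 2147483647
            props.put(ProducerConfig.LINGER_MS_CONFIG, 0);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "value"));
            } // close() flushes the outstanding send
        }
    }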
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | metrics.recording.level = INFO
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | connections.max.idle.ms = 540000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.234563744Z level=info msg="Executing migration" id="add index login_attempt.username"
23:16:53 simulator | 2024-02-25 23:14:19,815 INFO org.onap.policy.models.simulators started
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | metrics.sample.window.ms = 30000
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | default.api.timeout.ms = 60000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.235815513Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.252909ms
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | enable.auto.commit = true
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.243320733Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
23:16:53 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
23:16:53 policy-apex-pdp | partitioner.availability.timeout.ms = 0
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | exclude.internal.topics = true
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.244467381Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.146348ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | partitioner.class = null
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | fetch.max.bytes = 52428800
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.251620196Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
23:16:53 policy-apex-pdp | partitioner.ignore.keys = false
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | fetch.max.wait.ms = 500
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.268661028Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.039472ms
23:16:53 policy-apex-pdp | receive.buffer.bytes = 32768
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | fetch.min.bytes = 1
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.275292617Z level=info msg="Executing migration" id="create login_attempt v2"
23:16:53 policy-apex-pdp | reconnect.backoff.max.ms = 1000
23:16:53 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | group.id = bd340acf-32e5-46ed-9341-bc882164db21
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.275807695Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=515.708µs
23:16:53 policy-apex-pdp | reconnect.backoff.ms = 50
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | group.instance.id = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.28226456Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
23:16:53 policy-apex-pdp | request.timeout.ms = 30000
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | heartbeat.interval.ms = 3000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.283699182Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.434672ms
23:16:53 policy-apex-pdp | retries = 2147483647
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
23:16:53 policy-pap | interceptor.classes = []
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.288190867Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
23:16:53 policy-apex-pdp | retry.backoff.ms = 100
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | internal.leave.group.on.close = true
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.289067711Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=876.354µs
23:16:53 policy-apex-pdp | sasl.client.callback.handler.class = null
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
23:16:53 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.294767355Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
23:16:53 policy-apex-pdp | sasl.jaas.config = null
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | isolation.level = read_uncommitted
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.295396355Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=628.29µs
23:16:53 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.303598106Z level=info msg="Executing migration" id="create user auth table"
23:16:53 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | max.partition.fetch.bytes = 1048576
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.304297166Z level=info msg="Migration successfully executed" id="create user auth table" duration=698.56µs
23:16:53 policy-apex-pdp | sasl.kerberos.service.name = null
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
23:16:53 policy-pap | max.poll.interval.ms = 300000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.312477567Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
23:16:53 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | max.poll.records = 500
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.314432636Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.949199ms
23:16:53 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
23:16:53 policy-pap | metadata.max.age.ms = 300000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.320908392Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
23:16:53 policy-apex-pdp | sasl.login.callback.handler.class = null
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | metric.reporters = []
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.320964403Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=55.681µs
23:16:53 policy-apex-pdp | sasl.login.class = null
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | metrics.num.samples = 2
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.328464544Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
23:16:53 policy-apex-pdp | sasl.login.connect.timeout.ms = null
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | metrics.recording.level = INFO
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.332175919Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.710145ms
23:16:53 policy-apex-pdp | sasl.login.read.timeout.ms = null
23:16:53 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
23:16:53 policy-pap | metrics.sample.window.ms = 30000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.33764564Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
23:16:53 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.343476966Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.830866ms
23:16:53 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:53 policy-pap | receive.buffer.bytes = 65536
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.346617683Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
23:16:53 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | reconnect.backoff.max.ms = 1000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.3518719Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.254377ms
23:16:53 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | reconnect.backoff.ms = 50
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.361453352Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
23:16:53 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | request.timeout.ms = 30000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.367119736Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.667445ms
23:16:53 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
23:16:53 policy-pap | retry.backoff.ms = 100
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.373283188Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
23:16:53 policy-apex-pdp | sasl.mechanism = GSSAPI
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.client.callback.handler.class = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.374255732Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=973.664µs
23:16:53 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
23:16:53 policy-pap | sasl.jaas.config = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.379674642Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
23:16:53 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.387795432Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=8.11935ms
23:16:53 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.394920557Z level=info msg="Executing migration" id="create server_lock table"
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.395882152Z level=info msg="Migration successfully executed" id="create server_lock table" duration=962.495µs
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | sasl.kerberos.service.name = null
23:16:53 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.401294342Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.403025087Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.731415ms
23:16:53 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
23:16:53 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.409105417Z level=info msg="Executing migration" id="create user auth token table"
23:16:53 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
23:16:53 policy-pap | sasl.login.callback.handler.class = null
23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
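The policy-pap ConsumerConfig dump interleaved through the lines above (group.id = bd340acf-32e5-46ed-9341-bc882164db21, auto.offset.reset = latest, enable.auto.commit = true, StringDeserializer keys, bootstrap.servers = [kafka:9092]) describes a plain high-level consumer over PLAINTEXT. A minimal sketch with the same settings; the subscribed topic is an assumption, reusing policy-pdp-pap from the apex-pdp subscription earlier in the log:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "bd340acf-32e5-46ed-9341-bc882164db21");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap")); // assumed topic
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("%s@%d: %s%n", r.topic(), r.offset(), r.value());
                }
            }
        }
    }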
23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.410350656Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.246209ms 23:16:53 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:53 policy-pap | sasl.login.class = null 23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.416317514Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:53 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:53 policy-pap | sasl.login.connect.timeout.ms = null 23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.417361419Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.043765ms 23:16:53 policy-pap | sasl.login.read.timeout.ms = null 23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:16:53 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:53 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.426261892Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:53 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:16:53 policy-apex-pdp | security.providers = null 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.427944246Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.681604ms 23:16:53 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:16:53 policy-apex-pdp | send.buffer.bytes = 131072 23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.436398411Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:16:53 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica 
(state.change.logger) 23:16:53 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.437444218Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.045057ms 23:16:53 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:16:53 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.443420016Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:53 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:16:53 policy-apex-pdp | ssl.cipher.suites = null 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.452373078Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.956232ms 23:16:53 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:53 kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:53 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:53 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.459842289Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:53 kafka | [2024-02-25 23:14:52,439] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:53 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:53 policy-pap | sasl.mechanism = GSSAPI 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.460835043Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=993.284µs 23:16:53 kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:53 policy-apex-pdp | ssl.engine.factory.class = null 23:16:53 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.465725916Z level=info msg="Executing migration" id="create cache_data table" 23:16:53 kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
23:16:53 policy-apex-pdp | ssl.key.password = null
23:16:53 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.46667199Z level=info msg="Migration successfully executed" id="create cache_data table" duration=944.884µs
23:16:53 kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.471896127Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
23:16:53 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
23:16:53 kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.47282895Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=933.813µs
23:16:53 policy-apex-pdp | ssl.keystore.certificate.chain = null
23:16:53 kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:53 policy-db-migrator | > upgrade 0580-toscadatatypes.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.478423164Z level=info msg="Executing migration" id="create short_url table v1"
23:16:53 policy-apex-pdp | ssl.keystore.key = null
23:16:53 kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | ssl.keystore.location = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.479106733Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=683.179µs
23:16:53 kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
23:16:53 policy-apex-pdp | ssl.keystore.password = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.485968785Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
23:16:53 kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | ssl.keystore.type = JKS
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.486757067Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=788.072µs
23:16:53 kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | ssl.protocol = TLSv1.3
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.493245012Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
23:16:53 kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | ssl.provider = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.493381104Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=136.212µs
23:16:53 kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | security.protocol = PLAINTEXT
23:16:53 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
23:16:53 policy-apex-pdp | ssl.secure.random.implementation = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.499657408Z level=info msg="Executing migration" id="delete alert_definition table"
23:16:53 kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | security.providers = null
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.49976444Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=108.042µs
23:16:53 kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | send.buffer.bytes = 131072
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:53 policy-apex-pdp | ssl.truststore.certificates = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.503048778Z level=info msg="Executing migration" id="recreate alert_definition table"
23:16:53 kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | session.timeout.ms = 45000
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | ssl.truststore.location = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.503995862Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=946.564µs
23:16:53 kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | ssl.truststore.password = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.511063777Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
23:16:53 kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | ssl.truststore.type = JKS
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.512391256Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.327209ms
23:16:53 kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | ssl.cipher.suites = null
23:16:53 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
23:16:53 policy-apex-pdp | transaction.timeout.ms = 60000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.521727005Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
23:16:53 kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | transactional.id = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.523056654Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.329219ms
23:16:53 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:53 kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
23:16:53 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.528288072Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
23:16:53 policy-pap | ssl.engine.factory.class = null
23:16:53 kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.528381013Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=89.131µs
23:16:53 policy-pap | ssl.key.password = null
23:16:53 kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.693+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.531441008Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
23:16:53 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:53 kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.532358622Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=917.514µs
23:16:53 policy-pap | ssl.keystore.certificate.chain = null
23:16:53 kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.536410592Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
23:16:53 policy-pap | ssl.keystore.key = null
23:16:53 kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902893710
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.537211894Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=801.502µs
23:16:53 policy-pap | ssl.keystore.location = null
23:16:53 kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=39c8ecad-0633-4ba4-9ca4-00222bde67e2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.540868008Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
23:16:53 policy-pap | ssl.keystore.password = null
23:16:53 kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
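The apex-pdp producer above is created with transactional.id = null, yet Kafka logs "Instantiated an idempotent producer": in Kafka 3.x clients enable.idempotence defaults to true even without transactions. A minimal sketch reproducing that configuration is below; the topic and payload are taken from later log lines, and the class name is an illustrative assumption.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;
    import java.util.Properties;

    public class PdpStatusPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Matches the dump above: value.serializer = StringSerializer,
            // transaction.timeout.ms = 60000, no transactional.id.
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 60000);
            // enable.idempotence is true by default in 3.x, hence the
            // "Instantiated an idempotent producer" log line.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap",
                        "{\"messageName\":\"PDP_STATUS\"}"));
            }
        }
    }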
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|ServiceManager|main] service manager starting set alive
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.542138177Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.269549ms
23:16:53 policy-pap | ssl.keystore.type = JKS
23:16:53 kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.601964742Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
23:16:53 policy-pap | ssl.protocol = TLSv1.3
23:16:53 kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.713+00:00|INFO|ServiceManager|main] service manager starting topic sinks
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.60451318Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=2.549438ms
23:16:53 policy-pap | ssl.provider = null
23:16:53 kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.713+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.609662476Z level=info msg="Executing migration" id="Add column paused in alert_definition"
23:16:53 policy-pap | ssl.secure.random.implementation = null
23:16:53 kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.613871368Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.206142ms
23:16:53 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:53 kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.617665655Z level=info msg="Executing migration" id="drop alert_definition table"
23:16:53 policy-pap | ssl.truststore.certificates = null
23:16:53 kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.618405896Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=739.651µs
23:16:53 policy-pap | ssl.truststore.location = null
23:16:53 kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.622870392Z level=info msg="Executing migration" id="delete alert_definition_version table"
23:16:53 policy-pap | ssl.truststore.password = null
23:16:53 kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0630-toscanodetype.sql
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.622968093Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=97.141µs
23:16:53 policy-pap | ssl.truststore.type = JKS
23:16:53 kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b53cde7a-481f-427a-882b-d5bcee52ac2a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.625512521Z level=info msg="Executing migration" id="recreate alert_definition_version table"
23:16:53 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:53 kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b53cde7a-481f-427a-882b-d5bcee52ac2a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.626180621Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=667.72µs
23:16:53 policy-pap | 
23:16:53 kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|ServiceManager|main] service manager starting Create REST server
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.630086939Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
23:16:53 policy-pap | [2024-02-25T23:14:49.911+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:53 kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.746+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.631368718Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.281019ms
23:16:53 policy-pap | [2024-02-25T23:14:49.912+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:53 kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-apex-pdp | []
23:16:53 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.635965015Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
23:16:53 policy-pap | [2024-02-25T23:14:49.912+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902889910
23:16:53 kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.749+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.637051082Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.085187ms
23:16:53 policy-pap | [2024-02-25T23:14:49.916+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-1, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Subscribed to topic(s): policy-pdp-pap
23:16:53 kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0d0dd601-c190-45e7-b3e9-fc8e0be684d1","timestampMs":1708902893717,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.640947919Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
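The first PDP_STATUS heartbeat published to policy-pdp-pap above is plain JSON, and the apex-pdp REST layer later reports "Using GSON for REST calls". A sketch of deserializing that exact heartbeat with Gson follows; the POJO is a hand-written stand-in for illustration, not ONAP's actual PdpStatus model class.

    import com.google.gson.Gson;

    // Illustrative stand-in for the heartbeat payload, not the ONAP model class.
    public class PdpHeartbeat {
        String pdpType;
        String state;
        String healthy;
        String description;
        String messageName;
        String requestId;
        long timestampMs;
        String name;
        String pdpGroup;

        public static void main(String[] args) {
            // Verbatim payload from the log line above.
            String json = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                    + "\"description\":\"Pdp Heartbeat\",\"messageName\":\"PDP_STATUS\","
                    + "\"requestId\":\"0d0dd601-c190-45e7-b3e9-fc8e0be684d1\","
                    + "\"timestampMs\":1708902893717,"
                    + "\"name\":\"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c\","
                    + "\"pdpGroup\":\"defaultGroup\"}";
            PdpHeartbeat hb = new Gson().fromJson(json, PdpHeartbeat.class);
            // PAP matches heartbeats to PDP instances by name and pdpGroup.
            System.out.println(hb.name + " is " + hb.state + " in " + hb.pdpGroup);
        }
    }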
23:16:53 policy-pap | [2024-02-25T23:14:49.917+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
23:16:53 kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.64101514Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=67.521µs
23:16:53 policy-pap | allow.auto.create.topics = true
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.921+00:00|INFO|ServiceManager|main] service manager starting Rest Server
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.645180662Z level=info msg="Executing migration" id="drop alert_definition_version table"
23:16:53 policy-pap | auto.commit.interval.ms = 5000
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.922+00:00|INFO|ServiceManager|main] service manager starting
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.646202097Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.023855ms
23:16:53 policy-pap | auto.include.jmx.reporter = true
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.922+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
23:16:53 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
23:16:53 kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.651416625Z level=info msg="Executing migration" id="create alert_instance table"
23:16:53 policy-pap | auto.offset.reset = latest
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.922+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.652342628Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=925.114µs
23:16:53 policy-pap | bootstrap.servers = [kafka:9092]
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.935+00:00|INFO|ServiceManager|main] service manager started
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:53 kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.656132745Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
23:16:53 policy-pap | check.crcs = true
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.935+00:00|INFO|ServiceManager|main] service manager started
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.657417283Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.323269ms
23:16:53 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.936+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.662100723Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
23:16:53 policy-pap | client.id = consumer-policy-pap-2
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.662876794Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=775.761µs
23:16:53 policy-pap | client.rack = 
23:16:53 policy-db-migrator | > upgrade 0660-toscaparameter.sql
23:16:53 kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
23:16:53 policy-apex-pdp | [2024-02-25T23:14:53.935+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.666364545Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
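The policy-pap ConsumerConfig dump threaded through the lines above pins down the consumer's behaviour: group.id=policy-pap, client.id=consumer-policy-pap-2, auto.offset.reset=latest, enable.auto.commit=true with a 5000 ms interval, session.timeout.ms=45000, and bootstrap.servers=[kafka:9092]. The sketch below assembles the same settings in client code; it is a hedged reconstruction from the logged values, not PAP's own wiring.

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import java.util.List;
    import java.util.Properties;

    public class PapStyleConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "consumer-policy-pap-2");
            // "latest": with no committed offset, start from the end of the log.
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            // Auto-commit every 5 s within a 45 s session timeout, per the dump.
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000");
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "45000");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
            }
        }
    }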
23:16:53 policy-pap | connections.max.idle.ms = 540000
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
23:16:53 policy-apex-pdp | [2024-02-25T23:14:54.081+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.670479297Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.112062ms
23:16:53 policy-pap | default.api.timeout.ms = 60000
23:16:53 kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
23:16:53 policy-apex-pdp | [2024-02-25T23:14:54.081+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:53 policy-pap | enable.auto.commit = true
23:16:53 kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.673911528Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
23:16:53 policy-apex-pdp | [2024-02-25T23:14:54.083+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | exclude.internal.topics = true
23:16:53 kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.674678309Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=766.641µs
23:16:53 policy-apex-pdp | [2024-02-25T23:14:54.090+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] (Re-)joining group
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | fetch.max.bytes = 52428800
23:16:53 kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
23:16:53 policy-apex-pdp | [2024-02-25T23:14:54.088+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | fetch.max.wait.ms = 500
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.679096814Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
23:16:53 kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0670-toscapolicies.sql
23:16:53 policy-pap | fetch.min.bytes = 1
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.679824745Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=727.971µs
23:16:53 policy-apex-pdp | [2024-02-25T23:14:54.109+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Request joining group due to: need to re-join with the given member-id: consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7
23:16:53 kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
23:16:53 policy-pap | group.id = policy-pap
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.684800429Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
23:16:53 policy-apex-pdp | [2024-02-25T23:14:54.109+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.717583505Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=32.775875ms
23:16:53 policy-apex-pdp | [2024-02-25T23:14:54.109+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] (Re-)joining group
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
23:16:53 policy-pap | group.instance.id = null
23:16:53 kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
23:16:53 policy-apex-pdp | [2024-02-25T23:14:54.615+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | heartbeat.interval.ms = 3000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.72276886Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
23:16:53 kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | interceptor.classes = []
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.755085519Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=32.311039ms
23:16:53 policy-apex-pdp | [2024-02-25T23:14:54.617+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
23:16:53 kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | internal.leave.group.on.close = true
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.766716131Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
23:16:53 policy-apex-pdp | [2024-02-25T23:14:56.179+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.5 - policyadmin [25/Feb/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10652 "-" "Prometheus/2.50.0"
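The MemberIdRequiredException above is the normal first step of a JoinGroup round trip, not an error: a new consumer joins with an empty member id, the coordinator hands one back (consumer-b53cde7a-...-9cf0b086-...), and the consumer immediately re-joins with it; generation 1 then completes in the lines that follow. Below is a sketch of observing that handshake from client code via a rebalance listener; the class name is an illustrative assumption.

    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;

    public class JoinObserver {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");
            props.put("group.id", "b53cde7a-481f-427a-882b-d5bcee52ac2a");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                    @Override public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                        System.out.println("revoked: " + parts);
                    }
                    @Override public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        // Fires once the two-step join (member-id grant, then
                        // re-join) has completed and the range assignor has
                        // handed out policy-pdp-pap-0.
                        System.out.println("assigned: " + parts);
                    }
                });
                consumer.poll(Duration.ofSeconds(5)); // drives the join handshake
            }
        }
    }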
23:16:53 kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
23:16:53 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.768003191Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.29204ms
23:16:53 policy-apex-pdp | [2024-02-25T23:14:57.117+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7', protocol='range'}
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
23:16:53 policy-pap | isolation.level = read_uncommitted
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.779122434Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
23:16:53 policy-apex-pdp | [2024-02-25T23:14:57.127+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Finished assignment for group at generation 1: {consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7=Assignment(partitions=[policy-pdp-pap-0])}
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:53 kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
23:16:53 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.781333087Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=2.210173ms
23:16:53 policy-apex-pdp | [2024-02-25T23:14:57.153+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7', protocol='range'}
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
23:16:53 policy-pap | max.partition.fetch.bytes = 1048576
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.790859599Z level=info msg="Executing migration" id="add current_reason column related to current_state"
23:16:53 policy-apex-pdp | [2024-02-25T23:14:57.154+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
23:16:53 policy-pap | max.poll.interval.ms = 300000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.800193847Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=9.338247ms
23:16:53 policy-apex-pdp | [2024-02-25T23:14:57.156+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Adding newly assigned partitions: policy-pdp-pap-0
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
23:16:53 policy-pap | max.poll.records = 500
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.810314287Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
23:16:53 policy-apex-pdp | [2024-02-25T23:14:57.168+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Found no committed offset for partition policy-pdp-pap-0
23:16:53 policy-db-migrator | > upgrade 0690-toscapolicy.sql
23:16:53 kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
23:16:53 policy-pap | metadata.max.age.ms = 300000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.816198524Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.882847ms
23:16:53 policy-apex-pdp | [2024-02-25T23:14:57.179+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
23:16:53 policy-pap | metric.reporters = []
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.832140319Z level=info msg="Executing migration" id="create alert_rule table"
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.716+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
23:16:53 kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
23:16:53 policy-pap | metrics.num.samples = 2
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.833619972Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.478653ms
23:16:53 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2f1dcd45-4683-45cf-9d92-dddeb169e9b3","timestampMs":1708902913716,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
23:16:53 
23:16:53 policy-pap | metrics.recording.level = INFO
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.842073937Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.745+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
23:16:53 policy-pap | metrics.sample.window.ms = 30000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.843317915Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.242417ms
23:16:53 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2f1dcd45-4683-45cf-9d92-dddeb169e9b3","timestampMs":1708902913716,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
23:16:53 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.847383455Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.748+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:53 policy-db-migrator | > upgrade 0700-toscapolicytype.sql
23:16:53 kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
23:16:53 policy-pap | receive.buffer.bytes = 65536
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.848946138Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.561763ms
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.887+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
23:16:53 policy-pap | reconnect.backoff.max.ms = 1000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.853342034Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
23:16:53 policy-apex-pdp | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"8017ad77-05f8-444a-aa06-a451f278f050","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
23:16:53 kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
23:16:53 policy-pap | reconnect.backoff.ms = 50
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.854769524Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.42921ms
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.898+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
23:16:53 policy-pap | request.timeout.ms = 30000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.860908896Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.898+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
23:16:53 policy-pap | retry.backoff.ms = 100
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.861011107Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=101.781µs
23:16:53 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4057075b-fac2-492c-a7f4-7d5372a2ee8d","timestampMs":1708902913898,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
23:16:53 policy-pap | sasl.client.callback.handler.class = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.865411162Z level=info msg="Executing migration" id="add column for to alert_rule"
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.900+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
23:16:53 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
23:16:53 policy-pap | sasl.jaas.config = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.872845392Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=7.43358ms
23:16:53 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8017ad77-05f8-444a-aa06-a451f278f050","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d02f35-10cd-4f51-b9ca-9c8af9b90048","timestampMs":1708902913900,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
23:16:53 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.876413305Z level=info msg="Executing migration" id="add column annotations to alert_rule"
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.915+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
23:16:53 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:53 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.883053844Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.640769ms
23:16:53 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4057075b-fac2-492c-a7f4-7d5372a2ee8d","timestampMs":1708902913898,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.kerberos.service.name = null
23:16:53 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.887851114Z level=info msg="Executing migration" id="add column labels to alert_rule"
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.915+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:53 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.895177223Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=7.324849ms
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.922+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:53 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.900104946Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
23:16:53 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8017ad77-05f8-444a-aa06-a451f278f050","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d02f35-10cd-4f51-b9ca-9c8af9b90048","timestampMs":1708902913900,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
23:16:53 policy-pap | sasl.login.callback.handler.class = null
23:16:53 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.901035739Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=930.803µs
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.922+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.login.class = null
23:16:53 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.904579052Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.938+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:53 policy-pap | sasl.login.connect.timeout.ms = null
23:16:53 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.905649278Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.069196ms
23:16:53 policy-apex-pdp | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.login.read.timeout.ms = null
23:16:53 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.910098483Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.941+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:53 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.916211744Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.110061ms
23:16:53 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"00c40a54-df00-48d3-a9d7-3e82bceb0900","timestampMs":1708902913940,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:53 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.920571349Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.952+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | > upgrade 0730-toscaproperty.sql
23:16:53 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:53 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.926628648Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.056419ms
23:16:53 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"00c40a54-df00-48d3-a9d7-3e82bceb0900","timestampMs":1708902913940,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:53 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.93016113Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
23:16:53 policy-apex-pdp | [2024-02-25T23:15:13.952+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:53 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:53 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.931152095Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=990.225µs
23:16:53 policy-apex-pdp | [2024-02-25T23:15:14.000+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:53 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.935600581Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
23:16:53 policy-apex-pdp | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a77fa683-80f4-4771-a123-a237db6bdd66","timestampMs":1708902913954,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.mechanism = GSSAPI
23:16:53 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:18.941484378Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.882857ms
23:16:53 policy-apex-pdp | [2024-02-25T23:15:14.002+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:53 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.072205228Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
23:16:53 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a77fa683-80f4-4771-a123-a237db6bdd66","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"55953c4d-82bc-4c85-8ce1-d8e5f2afa2ca","timestampMs":1708902914002,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
23:16:53 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:53 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.077407326Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.206698ms
23:16:53 policy-apex-pdp | [2024-02-25T23:15:14.009+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:53 kafka | [2024-02-25 23:14:52,647] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.080903791Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
23:16:53 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a77fa683-80f4-4771-a123-a237db6bdd66","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"55953c4d-82bc-4c85-8ce1-d8e5f2afa2ca","timestampMs":1708902914002,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:53 kafka | [2024-02-25 23:14:52,647] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.080956512Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=53.361µs
23:16:53 policy-apex-pdp | [2024-02-25T23:15:14.011+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:53 kafka | [2024-02-25 23:14:52,648] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
23:16:53 policy-apex-pdp | [2024-02-25T23:15:56.091+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.5 - policyadmin [25/Feb/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10654 "-" "Prometheus/2.50.0"
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.088701619Z level=info msg="Executing migration" id="create alert_rule_version table"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:53 kafka | [2024-02-25 23:14:52,651] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.089490463Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=789.415µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.097437933Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
23:16:53 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
23:16:53 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.098229978Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=792.035µs
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.103048609Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
23:16:53 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.103849563Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=803.724µs
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | security.protocol = PLAINTEXT
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.109801616Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | security.providers = null
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.109855837Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=54.391µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | send.buffer.bytes = 131072
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.114609236Z level=info msg="Executing migration" id="add column for to alert_rule_version"
23:16:53 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
23:16:53 policy-pap | session.timeout.ms = 45000
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.121397585Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.786969ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.138805152Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:53 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.146235253Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.436301ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | ssl.cipher.suites = null
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.152900918Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:53 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.158694957Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.793599ms
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.162964248Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
23:16:53 policy-db-migrator | > upgrade 0770-toscarequirement.sql
23:16:53 policy-pap | ssl.engine.factory.class = null
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.170116682Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.151614ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | ssl.key.password = null
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.179668621Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
23:16:53 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.187854276Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=8.184274ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | ssl.keystore.certificate.chain = null
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.194852898Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | ssl.keystore.key = null
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.194905829Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=53.391µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | ssl.keystore.location = null
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.19924913Z level=info msg="Executing migration" id=create_alert_configuration_table
23:16:53 policy-db-migrator | > upgrade 0780-toscarequirements.sql
23:16:53 policy-pap | ssl.keystore.password = null
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.200738029Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.483878ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | ssl.keystore.type = JKS
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.210919641Z level=info msg="Executing migration" id="Add column default in alert_configuration"
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
23:16:53 policy-pap | ssl.protocol = TLSv1.3
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.218643386Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.721825ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | ssl.provider = null
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.226186499Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | ssl.secure.random.implementation = null
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.226364253Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=184.464µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:53 kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.231843916Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
23:16:53 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
23:16:53 policy-pap | ssl.truststore.certificates = null
23:16:53 kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.240309357Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.47843ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | ssl.truststore.location = null
23:16:53 kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.249185325Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
23:16:53 policy-pap | ssl.truststore.password = null
23:16:53 kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.251253374Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=2.069129ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | ssl.truststore.type = JKS
23:16:53 kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.258398039Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:53 kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.265572495Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.179296ms
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | 
23:16:53 kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.270022299Z level=info msg="Executing migration" id=create_ngalert_configuration_table
23:16:53 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
23:16:53 policy-pap | [2024-02-25T23:14:49.923+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
23:16:53 kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.270554619Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=530.93µs
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | [2024-02-25T23:14:49.923+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
23:16:53 kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.278036041Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
23:16:53 policy-pap | [2024-02-25T23:14:49.923+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902889923
23:16:53 kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.2790673Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.030489ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | [2024-02-25T23:14:49.923+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
23:16:53 kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.289111071Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 policy-pap | [2024-02-25T23:14:50.289+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.297744064Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=8.633423ms
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 policy-pap | [2024-02-25T23:14:50.445+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.307104151Z level=info msg="Executing migration" id="create provenance_type table"
23:16:53 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 policy-pap | [2024-02-25T23:14:50.714+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@f287a4e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@3879feec, org.springframework.security.web.context.SecurityContextHolderFilter@ce0bbd5, org.springframework.security.web.header.HeaderWriterFilter@1f7557fe, org.springframework.security.web.authentication.logout.LogoutFilter@7120daa6, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5e198c40, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7c359808, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@16361e61, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@71d2261e, org.springframework.security.web.access.ExceptionTranslationFilter@4ac0d49, org.springframework.security.web.access.intercept.AuthorizationFilter@280c3dc0]
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.307994697Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=900.246µs
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 policy-pap | [2024-02-25T23:14:51.629+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.318391595Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 policy-pap | [2024-02-25T23:14:51.734+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.319532616Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.140491ms
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 policy-pap | [2024-02-25T23:14:51.759+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.328919264Z level=info msg="Executing migration" id="create alert_image table"
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 policy-pap | [2024-02-25T23:14:51.778+00:00|INFO|ServiceManager|main] Policy PAP starting
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.329540526Z level=info msg="Migration successfully executed" id="create alert_image table" duration=621.601µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | [2024-02-25T23:14:51.779+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.337958985Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
23:16:53 policy-pap | [2024-02-25T23:14:51.779+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0820-toscatrigger.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.339087406Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.132631ms
23:16:53 policy-pap | [2024-02-25T23:14:51.780+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.344095011Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
23:16:53 policy-pap | [2024-02-25T23:14:51.780+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.344199203Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=105.242µs
23:16:53 policy-pap | [2024-02-25T23:14:51.781+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | [2024-02-25T23:14:51.781+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
23:16:53 kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.3477615Z level=info msg="Executing migration" id=create_alert_configuration_history_table
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | [2024-02-25T23:14:51.785+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bd340acf-32e5-46ed-9341-bc882164db21, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2a525f88
23:16:53 kafka | [2024-02-25 23:14:52,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.348512075Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=750.165µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | [2024-02-25T23:14:51.797+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bd340acf-32e5-46ed-9341-bc882164db21, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:53 kafka | [2024-02-25 23:14:52,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.353106761Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
23:16:53 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
23:16:53 policy-pap | [2024-02-25T23:14:51.797+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 
23:16:53 kafka | [2024-02-25 23:14:52,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.354101681Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=994.63µs
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | 
23:16:53 kafka | [2024-02-25 23:14:52,657] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.358445893Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
23:16:53 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
23:16:53 policy-pap | auto.commit.interval.ms = 5000
23:16:53 kafka | [2024-02-25 23:14:52,663] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.359181117Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | auto.include.jmx.reporter = true
23:16:53 kafka | [2024-02-25 23:14:52,664] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.364048538Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | auto.offset.reset = latest
23:16:53 kafka | [2024-02-25 23:14:52,664] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.364853284Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=804.736µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | bootstrap.servers = [kafka:9092]
23:16:53 kafka | [2024-02-25 23:14:52,664] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.370278847Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
23:16:53 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
23:16:53 policy-pap | check.crcs = true
23:16:53 kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.37205841Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.784373ms
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:53 kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
23:16:53 policy-pap | client.id = consumer-bd340acf-32e5-46ed-9341-bc882164db21-3
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.377075326Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
23:16:53 kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | client.rack =
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.385325671Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.250166ms
23:16:53 kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | connections.max.idle.ms = 540000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.389032541Z level=info msg="Executing migration" id="create library_element table v1"
23:16:53 kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | default.api.timeout.ms = 60000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.389873508Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=838.677µs
23:16:53 kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
23:16:53 policy-pap | enable.auto.commit = true
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.397114035Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
23:16:53 kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | exclude.internal.topics = true
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.398060002Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=947.977µs
23:16:53 kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
23:16:53 policy-pap | fetch.max.bytes = 52428800
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.401333025Z level=info msg="Executing migration" id="create library_element_connection table v1"
23:16:53 kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | fetch.max.wait.ms = 500
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.402085579Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=752.314µs
23:16:53 kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | fetch.min.bytes = 1
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.405700577Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | group.id = bd340acf-32e5-46ed-9341-bc882164db21
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.407620384Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.919237ms
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
23:16:53 policy-pap | group.instance.id = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.414102376Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | heartbeat.interval.ms = 3000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.415252919Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.149772ms
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
23:16:53 policy-pap | interceptor.classes = []
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.419316825Z level=info msg="Executing migration" id="increase max description length to 2048"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | internal.leave.group.on.close = true
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.419346535Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=32.95µs
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.433405392Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | isolation.level = read_uncommitted
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.433500493Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=97.261µs
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
23:16:53 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.488189109Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | max.partition.fetch.bytes = 1048576
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.488564136Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=378.187µs
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.495009048Z level=info msg="Executing migration" id="create data_keys table"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | max.poll.interval.ms = 300000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.495901354Z level=info msg="Migration successfully executed" id="create data_keys table" duration=896.867µs
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | max.poll.records = 500
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.502206804Z level=info msg="Executing migration" id="create secrets table"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | metadata.max.age.ms = 300000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.502800125Z level=info msg="Migration successfully executed" id="create secrets table" duration=593.351µs
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
23:16:53 policy-pap | metric.reporters = []
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.506367933Z level=info msg="Executing migration" id="rename data_keys name column to id"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | metrics.num.samples = 2
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.555542644Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=49.164471ms
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
23:16:53 policy-pap | metrics.recording.level = INFO
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.563958643Z level=info msg="Executing migration" id="add name column into data_keys"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | metrics.sample.window.ms = 30000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.571910143Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.95831ms
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.576158964Z level=info msg="Executing migration" id="copy data_keys id column values into name"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | receive.buffer.bytes = 65536
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.576311676Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=153.023µs
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
23:16:53 policy-pap | reconnect.backoff.max.ms = 1000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.579643869Z level=info msg="Executing migration" id="rename data_keys name column to label"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | reconnect.backoff.ms = 50
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.622504761Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=42.861332ms
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
23:16:53 policy-pap | request.timeout.ms = 30000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.628569766Z level=info msg="Executing migration" id="rename data_keys id column back to name"
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | retry.backoff.ms = 100
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.680319455Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=51.748799ms
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.client.callback.handler.class = null
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.jaas.config = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.684393143Z level=info msg="Executing migration" id="create kv_store table v1"
23:16:53 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
23:16:53 kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.684951533Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=553.16µs
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.688755835Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
23:16:53 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.kerberos.service.name = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.689555629Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=798.964µs
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.695819769Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.696272437Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=457.798µs
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.login.callback.handler.class = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.701892064Z level=info msg="Executing migration" id="create permission table"
23:16:53 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.login.class = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.702725709Z level=info msg="Migration successfully executed" id="create permission table" duration=833.295µs
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.login.connect.timeout.ms = null
23:16:53 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.712148288Z level=info msg="Executing migration" id="add unique index permission.role_id"
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.login.read.timeout.ms = null
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.712947333Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=799.115µs
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.717235704Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.718284864Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.04835ms
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:53 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.723446662Z level=info msg="Executing migration" id="create role table"
23:16:53 kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
23:16:53 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.724218587Z level=info msg="Migration successfully executed" id="create role table" duration=774.174µs
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
23:16:53 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:53 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.728769432Z level=info msg="Executing migration" id="add column display_name"
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
23:16:53 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.734502721Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.732429ms
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
23:16:53 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.738597118Z level=info msg="Executing migration" id="add column group_name"
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
23:16:53 policy-pap | sasl.mechanism = GSSAPI
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.746285114Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.687316ms
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:53 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.750792399Z level=info msg="Executing migration" id="add index role.org_id"
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.752036543Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.249364ms
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:53 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.755906826Z level=info msg="Executing migration" id="add unique index role_org_id_name"
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.757037618Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.130642ms
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.760904591Z level=info msg="Executing migration" id="add index role_org_id_uid"
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.762045212Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.140451ms
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:53 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.769448132Z level=info msg="Executing migration" id="create team role table"
23:16:53 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.77140845Z level=info msg="Migration successfully executed" id="create team role table" duration=1.957768ms
23:16:53 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:53 kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
23:16:53 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.779094645Z level=info msg="Executing migration" id="add index team_role.org_id"
23:16:53 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.780259667Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.164422ms
23:16:53 policy-pap | security.protocol = PLAINTEXT
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.784564278Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
23:16:53 policy-pap | security.providers = null
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.785574918Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.01422ms
23:16:53 policy-pap | send.buffer.bytes = 131072
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.791301466Z level=info msg="Executing migration" id="add index team_role.team_id"
23:16:53 policy-pap | session.timeout.ms = 45000
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.792091791Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=790.265µs
23:16:53 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
23:16:53 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.795931903Z level=info msg="Executing migration" id="create user role table"
23:16:53 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.797366411Z level=info msg="Migration successfully executed" id="create user role table" duration=1.430058ms
23:16:53 policy-pap | ssl.cipher.suites = null
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.80207514Z level=info msg="Executing migration" id="add index user_role.org_id"
23:16:53 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.803942635Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.867615ms
23:16:53 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
23:16:53 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.811785354Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
23:16:53 policy-pap | ssl.engine.factory.class = null
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.812720821Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=935.997µs
23:16:53 policy-pap | ssl.key.password = null
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
23:16:53 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.819132683Z level=info msg="Executing migration" id="add index user_role.user_id"
23:16:53 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.819955108Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=823.305µs
23:16:53 policy-pap | ssl.keystore.certificate.chain = null
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.827309208Z level=info msg="Executing migration" id="create builtin role table"
23:16:53 policy-pap | ssl.keystore.key = null
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.82850371Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.195192ms
23:16:53 policy-pap | ssl.keystore.location = null
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.835497193Z level=info msg="Executing migration" id="add index builtin_role.role_id"
23:16:53 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
23:16:53 policy-pap | ssl.keystore.password = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.83854143Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=3.048657ms
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
23:16:53 policy-pap | ssl.keystore.type = JKS
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.843994924Z level=info msg="Executing migration" id="add index builtin_role.name"
23:16:53 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
23:16:53 policy-pap | ssl.protocol = TLSv1.3
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.845058453Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.063309ms
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
23:16:53 policy-pap | ssl.provider = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.851819371Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
23:16:53 policy-pap | ssl.secure.random.implementation = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.86332036Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=11.501598ms
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
23:16:53 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.95950866Z level=info msg="Executing migration" id="add index builtin_role.org_id"
23:16:53 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
23:16:53 policy-pap | ssl.truststore.certificates = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.961432227Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.928127ms
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
23:16:53 policy-pap | ssl.truststore.location = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.967231986Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
23:16:53 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
23:16:53 policy-pap | ssl.truststore.password = null
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.968303896Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.0741ms
23:16:53 policy-db-migrator | --------------
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
23:16:53 policy-pap | ssl.truststore.type = JKS
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.973569926Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
23:16:53 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.975150965Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.580449ms
23:16:53 policy-db-migrator | 
23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE
[Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:53 policy-pap | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.98169057Z level=info msg="Executing migration" id="add unique index role.uid" 23:16:53 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.805+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.983441362Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.750422ms 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.805+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.989229102Z level=info msg="Executing migration" id="create seed assignment table" 23:16:53 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.805+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902891805 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.990435785Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.206013ms 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.805+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Subscribed to topic(s): policy-pdp-pap 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.995192676Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.806+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:19.997245194Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=2.046198ms 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr 
request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.806+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f430ca1f-0b14-4277-b999-dfdb1b16d100, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3f2ab6ec 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.004580493Z level=info msg="Executing migration" id="add column hidden to role table" 23:16:53 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.806+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f430ca1f-0b14-4277-b999-dfdb1b16d100, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.012461133Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.8807ms 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.807+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.017422337Z level=info msg="Executing migration" id="permission kind migration" 23:16:53 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:53 kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:53 policy-pap | allow.auto.create.topics = true 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.028238442Z level=info msg="Migration successfully executed" id="permission kind migration" duration=10.818585ms 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,721] INFO [ReplicaFetcherManager on broker 
1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:53 policy-pap | auto.commit.interval.ms = 5000 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.035330107Z level=info msg="Executing migration" id="permission attribute migration" 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,721] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 23:16:53 policy-pap | auto.include.jmx.reporter = true 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.043786018Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.455081ms 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,780] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | auto.offset.reset = latest 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.049036058Z level=info msg="Executing migration" id="permission identifier migration" 23:16:53 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:53 kafka | [2024-02-25 23:14:52,796] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | bootstrap.servers = [kafka:9092] 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.057527199Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.489501ms 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,799] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:53 policy-pap | check.crcs = true 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.06334981Z level=info msg="Executing migration" id="add permission identifier index" 23:16:53 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) 
ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:53 kafka | [2024-02-25 23:14:52,800] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.064314518Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=964.078µs 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,802] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:53 policy-pap | client.id = consumer-policy-pap-4 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.068420787Z level=info msg="Executing migration" id="create query_history table v1" 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,817] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | client.rack = 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.06966805Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.248824ms 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,818] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | connections.max.idle.ms = 540000 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.076212015Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:16:53 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:53 kafka | [2024-02-25 23:14:52,818] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:53 policy-pap | default.api.timeout.ms = 60000 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.077312405Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.1004ms 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,818] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | enable.auto.commit = true 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.084520491Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:16:53 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:53 kafka | [2024-02-25 23:14:52,818] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:53 policy-pap | exclude.internal.topics = true 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.084593343Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=73.362µs 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,831] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | fetch.max.bytes = 52428800 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.088556339Z level=info msg="Executing migration" id="rbac disabled migrator" 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,832] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | fetch.max.wait.ms = 500 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.08859481Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=39.191µs 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,833] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:53 policy-pap | fetch.min.bytes = 1 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.092207448Z level=info msg="Executing migration" id="teams permissions migration" 23:16:53 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:53 kafka | [2024-02-25 23:14:52,833] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | group.id = policy-pap 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.092679917Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=472.629µs 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,833] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-pap | group.instance.id = null 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.09965029Z level=info msg="Executing migration" id="dashboard permissions" 23:16:53 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:53 kafka | [2024-02-25 23:14:52,843] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | heartbeat.interval.ms = 3000 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.100229411Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=579.561µs 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,844] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | interceptor.classes = [] 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.105507271Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,845] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:53 policy-pap | internal.leave.group.on.close = true 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.106140643Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=633.392µs 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,845] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.115140584Z level=info msg="Executing migration" id="drop managed folder create actions" 23:16:53 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:53 kafka | [2024-02-25 23:14:52,845] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-pap | isolation.level = read_uncommitted 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.115338307Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=197.943µs 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,856] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.121504095Z level=info msg="Executing migration" id="alerting notification permissions" 23:16:53 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:53 kafka | [2024-02-25 23:14:52,857] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | max.partition.fetch.bytes = 1048576 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.121821281Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=317.546µs 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,858] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:53 policy-pap | max.poll.interval.ms = 300000 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.126661473Z level=info msg="Executing migration" id="create query_history_star table v1" 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,858] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | max.poll.records = 500 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.127425968Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=763.654µs 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,858] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-pap | metadata.max.age.ms = 300000 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.132427943Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:16:53 kafka | [2024-02-25 23:14:52,868] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:53 policy-pap | metric.reporters = [] 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.133797988Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.369085ms 23:16:53 kafka | [2024-02-25 23:14:52,869] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | metrics.num.samples = 2 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.139433266Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:16:53 kafka | [2024-02-25 23:14:52,869] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:53 policy-pap | metrics.recording.level = INFO 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.14863034Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.196065ms 23:16:53 kafka | [2024-02-25 23:14:52,870] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | metrics.sample.window.ms = 30000 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.153712277Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:53 kafka | [2024-02-25 23:14:52,872] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
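The policy-db-migrator entries interleaved above (steps 0980 through 1060) add the TOSCA referential-integrity constraints one ALTER TABLE at a time. Below is a minimal JDBC sketch of replaying one such step by hand: the DDL string is copied verbatim from the 1000-FK_ToscaServiceTemplate_dataTypesName.sql step logged above, while the class name, connection URL, and credentials are illustrative assumptions, not values from this build.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public final class ReplayToscaFkStep {
        public static void main(String[] args) throws Exception {
            // DDL copied verbatim from the logged 1000-FK_ToscaServiceTemplate_dataTypesName.sql step.
            String ddl = "ALTER TABLE toscaservicetemplate"
                + " ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName"
                + " FOREIGN KEY (dataTypesName, dataTypesVersion)"
                + " REFERENCES toscadatatypes (name, version)"
                + " ON UPDATE RESTRICT ON DELETE RESTRICT";
            // URL, user, and password are placeholders; a MariaDB JDBC driver is assumed on the classpath.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "user", "password");
                 Statement stmt = conn.createStatement()) {
                stmt.executeUpdate(ddl);
            }
        }
    }

Each of these constraints uses ON UPDATE RESTRICT ON DELETE RESTRICT, so a referenced row (here in toscadatatypes) cannot be changed or removed while a service template still points at it.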
(state.change.logger) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.153830409Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=118.952µs 23:16:53 kafka | [2024-02-25 23:14:52,883] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | receive.buffer.bytes = 65536 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.157383426Z level=info msg="Executing migration" id="create correlation table v1" 23:16:53 kafka | [2024-02-25 23:14:52,884] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:53 policy-pap | reconnect.backoff.max.ms = 1000 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.158351975Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=968.289µs 23:16:53 kafka | [2024-02-25 23:14:52,884] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | reconnect.backoff.ms = 50 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.162135887Z level=info msg="Executing migration" id="add index correlations.uid" 23:16:53 kafka | [2024-02-25 23:14:52,884] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:16:53 policy-pap | request.timeout.ms = 30000 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.16334173Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.203313ms 23:16:53 kafka | [2024-02-25 23:14:52,884] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | retry.backoff.ms = 100 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.173092505Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:16:53 kafka | [2024-02-25 23:14:52,894] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.client.callback.handler.class = null 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.176193134Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=3.106549ms 23:16:53 kafka | [2024-02-25 23:14:52,894] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.jaas.config = null 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.186167114Z level=info msg="Executing migration" id="add correlation config column" 23:16:53 kafka | [2024-02-25 23:14:52,895] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:53 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.201912722Z level=info msg="Migration successfully executed" id="add correlation config column" duration=15.755178ms 23:16:53 kafka | [2024-02-25 23:14:52,895] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.21544373Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:16:53 kafka | [2024-02-25 23:14:52,895] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
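The kafka broker lines running through this section show broker 1 completing the become-leader transition for 51 partitions (the __consumer_offsets partitions plus policy-pdp-pap-0): fetchers are removed, each partition log is created under /var/lib/kafka/data, and leadership starts at leader epoch 0 with ISR [1]. A small read-only sketch of inspecting the resulting leadership from a client follows, assuming the Kafka Java AdminClient; only the kafka:9092 bootstrap address comes from this log, and the class name is hypothetical.

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public final class DescribeLeadership {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // Fetch per-partition leader and ISR for the topics seen in this log.
                Map<String, TopicDescription> topics = admin
                    .describeTopics(List.of("policy-pdp-pap", "__consumer_offsets"))
                    .allTopicNames()
                    .get();
                topics.forEach((name, desc) -> desc.partitions().forEach(p ->
                    System.out.printf("%s-%d leader=%s isr=%s%n",
                        name, p.partition(), p.leader(), p.isr())));
            }
        }
    }

On a single-broker setup like this one, every partition should report leader 1 and ISR [1], matching the state.change.logger output above.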
(state.change.logger) 23:16:53 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:53 policy-pap | sasl.kerberos.service.name = null 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.217303225Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.862975ms 23:16:53 kafka | [2024-02-25 23:14:52,904] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.224651284Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:16:53 kafka | [2024-02-25 23:14:52,906] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.225509162Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=857.707µs 23:16:53 kafka | [2024-02-25 23:14:52,906] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.login.callback.handler.class = null 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.233073305Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:16:53 kafka | [2024-02-25 23:14:52,906] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:53 policy-pap | sasl.login.class = null 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.295867728Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=62.794093ms 23:16:53 kafka | [2024-02-25 23:14:52,906] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.login.connect.timeout.ms = null 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.30439624Z level=info msg="Executing migration" id="create correlation v2" 23:16:53 kafka | [2024-02-25 23:14:52,918] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:53 policy-pap | sasl.login.read.timeout.ms = null 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.306191295Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.799155ms 23:16:53 kafka | [2024-02-25 23:14:52,920] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.407879877Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:16:53 kafka | [2024-02-25 23:14:52,920] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.409895215Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.015168ms 23:16:53 kafka | [2024-02-25 23:14:52,920] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:53 kafka | [2024-02-25 23:14:52,920] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.414421081Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:16:53 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:53 kafka | [2024-02-25 23:14:52,931] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.415643975Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.224274ms 23:16:53 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:53 kafka | [2024-02-25 23:14:52,932] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.422310831Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:16:53 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:53 kafka | [2024-02-25 23:14:52,932] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.424250798Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.939407ms 23:16:53 policy-pap | sasl.mechanism = GSSAPI 23:16:53 kafka | [2024-02-25 23:14:52,932] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.432474414Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:16:53 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,932] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.432726379Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=252.625µs 23:16:53 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:52,940] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.436820366Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:16:53 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:53 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:53 kafka | [2024-02-25 23:14:52,941] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.440559498Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=3.737942ms 23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:52,941] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.450327743Z level=info msg="Executing migration" id="add provisioning column" 23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:53 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:16:53 kafka | [2024-02-25 23:14:52,941] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.459016878Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.685805ms 23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.464840779Z level=info msg="Executing migration" id="create entity_events table" 23:16:53 kafka | [2024-02-25 23:14:52,942] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.465534302Z level=info msg="Migration successfully executed" id="create entity_events table" duration=696.953µs 23:16:53 kafka | [2024-02-25 23:14:52,949] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.470526127Z level=info msg="Executing migration" id="create dashboard public config v1" 23:16:53 kafka | [2024-02-25 23:14:52,950] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:53 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.472101597Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.57549ms 23:16:53 kafka | [2024-02-25 23:14:52,950] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:16:53 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.47908715Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:53 kafka | [2024-02-25 23:14:52,950] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | security.protocol = PLAINTEXT 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.479582729Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:53 kafka | [2024-02-25 23:14:52,950] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
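Steps 0120 through 0140 above rebuild the pdpstatistics primary key in dependent phases: drop the old key, add the POLICYUNDEPLOY* counters and a NOT NULL ID column, backfill ID with ROW_NUMBER() ordered by timeStamp, and finally declare the composite key (ID, name, version). A sketch of the same sequence is below; the statements are copied from the migrator output, while the JDBC scaffolding, class name, and connection details around them are assumed.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public final class RebuildPdpStatisticsPk {
        // Statements copied verbatim from the 0120/0130/0140 migrator steps above.
        private static final String[] STEPS = {
            "ALTER TABLE pdpstatistics DROP PRIMARY KEY",
            "ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL"
                + " AFTER POLICYEXECUTEDSUCCESSCOUNT,"
                + " ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL,"
                + " ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL,"
                + " ADD COLUMN ID BIGINT NOT NULL",
            "UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp,"
                + " ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics"
                + " GROUP BY name, version, timeStamp) AS t"
                + " ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp)"
                + " SET p.id=t.row_num",
            "ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)"
        };

        public static void main(String[] args) throws Exception {
            // Placeholder connection details, as in the earlier sketch.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "user", "password");
                 Statement stmt = conn.createStatement()) {
                for (String step : STEPS) {
                    stmt.executeUpdate(step); // each step must succeed before the next runs
                }
            }
        }
    }

The ROW_NUMBER() backfill has to run before the ADD CONSTRAINT step, so that every row carries a distinct ID and the new composite key is unique when it is enforced.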
(state.change.logger) 23:16:53 policy-pap | security.providers = null 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.483126587Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:53 kafka | [2024-02-25 23:14:52,960] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | send.buffer.bytes = 131072 23:16:53 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.483597556Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:53 kafka | [2024-02-25 23:14:52,961] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | session.timeout.ms = 45000 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.487887226Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:16:53 kafka | [2024-02-25 23:14:52,961] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:16:53 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:53 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.488652852Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=765.406µs 23:16:53 kafka | [2024-02-25 23:14:52,961] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.495357519Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:16:53 kafka | [2024-02-25 23:14:52,961] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-pap | ssl.cipher.suites = null 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.496691065Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.332466ms 23:16:53 kafka | [2024-02-25 23:14:52,970] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.505287298Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:16:53 kafka | [2024-02-25 23:14:52,972] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:53 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.506381198Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.09381ms 23:16:53 kafka | [2024-02-25 23:14:52,972] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 23:16:53 policy-pap | ssl.engine.factory.class = null 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.512201849Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:53 kafka | [2024-02-25 23:14:52,973] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | ssl.key.password = null 23:16:53 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.513358982Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.156172ms 23:16:53 kafka | [2024-02-25 23:14:52,973] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.520005427Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:16:53 kafka | [2024-02-25 23:14:52,987] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | ssl.keystore.certificate.chain = null 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.521051777Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.04591ms 23:16:53 kafka | [2024-02-25 23:14:52,988] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | ssl.keystore.key = null 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.527717994Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:53 kafka | [2024-02-25 23:14:52,988] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 23:16:53 policy-pap | ssl.keystore.location = null 23:16:53 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.529456256Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.746232ms 23:16:53 kafka | [2024-02-25 23:14:52,988] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | ssl.keystore.password = null 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.5369921Z level=info msg="Executing migration" id="Drop public config table" 23:16:53 kafka | [2024-02-25 23:14:52,988] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-pap | ssl.keystore.type = JKS 23:16:53 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.538996878Z level=info msg="Migration successfully executed" id="Drop public config table" duration=2.004178ms 23:16:53 kafka | [2024-02-25 23:14:52,999] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | ssl.protocol = TLSv1.3 23:16:53 policy-db-migrator | JOIN pdpstatistics b 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.547114243Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:16:53 kafka | [2024-02-25 23:14:52,999] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | ssl.provider = null 23:16:53 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.548181443Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.067269ms 23:16:53 kafka | [2024-02-25 23:14:52,999] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 23:16:53 policy-pap | ssl.secure.random.implementation = null 23:16:53 policy-db-migrator | SET a.id = b.id 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.555058483Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:16:53 kafka | [2024-02-25 23:14:52,999] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.556515891Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.457988ms 23:16:53 kafka | [2024-02-25 23:14:52,999] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
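One note on the migrator output just above: the 0170-jpapdpstatistics_enginestats.sql step is printed as separate fragments but is a single statement. Reassembled from the logged pieces it reads: UPDATE jpapdpstatistics_enginestats a JOIN pdpstatistics b ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp SET a.id = b.id. It copies each engine-stats row's newly assigned numeric id from the matching pdpstatistics row, using the same (name, version, timeStamp) join key as the 0140 backfill, after which step 0180 drops the now-redundant timeStamp column.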
(state.change.logger) 23:16:53 policy-pap | ssl.truststore.certificates = null 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.560618549Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:53 kafka | [2024-02-25 23:14:53,008] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | ssl.truststore.location = null 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.561477676Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=859.117µs 23:16:53 kafka | [2024-02-25 23:14:53,009] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | ssl.truststore.password = null 23:16:53 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.567754505Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:53 kafka | [2024-02-25 23:14:53,009] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:16:53 policy-pap | ssl.truststore.type = JKS 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.568897646Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.143991ms 23:16:53 kafka | [2024-02-25 23:14:53,009] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:53 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.573497084Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:53 kafka | [2024-02-25 23:14:53,009] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-pap | 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.610991596Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=37.494692ms 23:16:53 kafka | [2024-02-25 23:14:53,016] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | [2024-02-25T23:14:51.811+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.618272944Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:53 kafka | [2024-02-25 23:14:53,017] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | [2024-02-25T23:14:51.811+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.62489114Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.617146ms 23:16:53 kafka | [2024-02-25 23:14:53,017] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:16:53 policy-pap | [2024-02-25T23:14:51.812+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902891811 23:16:53 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.633751399Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:53 kafka | [2024-02-25 23:14:53,017] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | [2024-02-25T23:14:51.812+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.642695899Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.94427ms 23:16:53 policy-pap | [2024-02-25T23:14:51.812+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:53 kafka | [2024-02-25 23:14:53,017] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
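For orientation: the policy-pap settings dumped above, together with the "Subscribed to topic(s): policy-pdp-pap" line, correspond to an ordinary Kafka consumer. A minimal sketch follows, assuming only what the log shows (bootstrap server kafka:9092, group policy-pap, String deserializers, PLAINTEXT security); the class name and the poll/print loop are invented for illustration and are not ONAP's actual wiring.

// Illustrative sketch of the consumer the config dump above describes (assumptions noted in the lead-in).
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // bootstrap.servers = [kafka:9092]
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");            // groupId seen in the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer"); // value.deserializer from the dump
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));                  // logged as "Subscribed to topic(s)"
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("%s -> %s%n", r.topic(), r.value());
            }
        }
    }
}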
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.649581319Z level=info msg="Executing migration" id="delete orphaned public dashboards"
23:16:53 kafka | [2024-02-25 23:14:53,027] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | [2024-02-25T23:14:51.812+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f430ca1f-0b14-4277-b999-dfdb1b16d100, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.650055948Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=480.009µs
23:16:53 kafka | [2024-02-25 23:14:53,030] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | [2024-02-25T23:14:51.813+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bd340acf-32e5-46ed-9341-bc882164db21, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.656285527Z level=info msg="Executing migration" id="add share column"
23:16:53 kafka | [2024-02-25 23:14:53,030] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | [2024-02-25T23:14:51.813+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=00b26071-c70f-48c7-b06a-57ab45326f51, alive=false, publisher=null]]: starting
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.666129834Z level=info msg="Migration successfully executed" id="add share column" duration=9.841887ms
23:16:53 kafka | [2024-02-25 23:14:53,030] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
23:16:53 policy-pap | [2024-02-25T23:14:51.846+00:00|INFO|ProducerConfig|main] ProducerConfig values:
23:16:53 kafka | [2024-02-25 23:14:53,031] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.670570939Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | acks = -1
23:16:53 kafka | [2024-02-25 23:14:53,039] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.670759672Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=187.993µs
23:16:53 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
23:16:53 policy-pap | auto.include.jmx.reporter = true
23:16:53 kafka | [2024-02-25 23:14:53,039] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.678423137Z level=info msg="Executing migration" id="create file table"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | batch.size = 16384
23:16:53 kafka | [2024-02-25 23:14:53,039] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.680486047Z level=info msg="Migration successfully executed" id="create file table" duration=2.06941ms
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | bootstrap.servers = [kafka:9092]
23:16:53 kafka | [2024-02-25 23:14:53,039] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.684938521Z level=info msg="Executing migration" id="file table idx: path natural pk"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | buffer.memory = 33554432
23:16:53 kafka | [2024-02-25 23:14:53,040] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.686219495Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.280854ms
23:16:53 policy-db-migrator | > upgrade 0210-sequence.sql
23:16:53 policy-pap | client.dns.lookup = use_all_dns_ips
23:16:53 kafka | [2024-02-25 23:14:53,048] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.690239382Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | client.id = producer-1
23:16:53 kafka | [2024-02-25 23:14:53,049] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.691533056Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.218003ms
23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
23:16:53 policy-pap | compression.type = none
23:16:53 kafka | [2024-02-25 23:14:53,049] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.69908961Z level=info msg="Executing migration" id="create file_meta table"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | connections.max.idle.ms = 540000
23:16:53 kafka | [2024-02-25 23:14:53,049] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.699853725Z level=info msg="Migration successfully executed" id="create file_meta table" duration=764.145µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | delivery.timeout.ms = 120000
23:16:53 kafka | [2024-02-25 23:14:53,049] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.706590653Z level=info msg="Executing migration" id="file table idx: path key"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | enable.idempotence = true
23:16:53 kafka | [2024-02-25 23:14:53,060] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.707875717Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.284544ms
23:16:53 policy-db-migrator | > upgrade 0220-sequence.sql
23:16:53 policy-pap | interceptor.classes = []
23:16:53 kafka | [2024-02-25 23:14:53,060] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.713442233Z level=info msg="Executing migration" id="set path collation in file table"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:53 kafka | [2024-02-25 23:14:53,060] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.713538745Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=100.542µs
23:16:53 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
23:16:53 policy-pap | linger.ms = 0
23:16:53 kafka | [2024-02-25 23:14:53,060] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.72169838Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | max.block.ms = 60000
23:16:53 kafka | [2024-02-25 23:14:53,060] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
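The ProducerConfig dump that starts above (acks = -1, enable.idempotence = true, and later retries = 2147483647) is the standard shape of an idempotent Kafka producer; once idempotence is enabled, acks=all and effectively unlimited retries are the client defaults. A hedged sketch, shown explicitly for illustration only; the class name and factory method are invented, not ONAP's code:

// Sketch of a producer whose effective settings mirror the dump (assumptions in the lead-in).
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class PdpPapPublisher {
    static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // bootstrap.servers = [kafka:9092]
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);          // later logged: "Instantiated an idempotent producer"
        props.put(ProducerConfig.ACKS_CONFIG, "all");                       // dumped as acks = -1
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }
}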
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.721779151Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=81.531µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | max.in.flight.requests.per.connection = 5
23:16:53 kafka | [2024-02-25 23:14:53,068] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.726586823Z level=info msg="Executing migration" id="managed permissions migration"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | max.request.size = 1048576
23:16:53 kafka | [2024-02-25 23:14:53,069] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.727158844Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=568.721µs
23:16:53 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
23:16:53 policy-pap | metadata.max.age.ms = 300000
23:16:53 kafka | [2024-02-25 23:14:53,069] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.732041867Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | metadata.max.idle.ms = 300000
23:16:53 kafka | [2024-02-25 23:14:53,069] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.73224380Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=202.093µs
23:16:53 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
23:16:53 policy-pap | metric.reporters = []
23:16:53 kafka | [2024-02-25 23:14:53,069] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.737070362Z level=info msg="Executing migration" id="RBAC action name migrator"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | metrics.num.samples = 2
23:16:53 kafka | [2024-02-25 23:14:53,077] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.73799330Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=922.968µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | metrics.recording.level = INFO
23:16:53 kafka | [2024-02-25 23:14:53,077] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.74327236Z level=info msg="Executing migration" id="Add UID column to playlist"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | metrics.sample.window.ms = 30000
23:16:53 kafka | [2024-02-25 23:14:53,077] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.752413903Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.129643ms
23:16:53 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
23:16:53 policy-pap | partitioner.adaptive.partitioning.enable = true
23:16:53 kafka | [2024-02-25 23:14:53,077] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.819111161Z level=info msg="Executing migration" id="Update uid column values in playlist"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | partitioner.availability.timeout.ms = 0
23:16:53 kafka | [2024-02-25 23:14:53,078] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.819620241Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=516.09µs
23:16:53 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
23:16:53 policy-pap | partitioner.class = null
23:16:53 kafka | [2024-02-25 23:14:53,086] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.827619793Z level=info msg="Executing migration" id="Add index for uid in playlist"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | partitioner.ignore.keys = false
23:16:53 kafka | [2024-02-25 23:14:53,086] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.828763354Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.146051ms
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | receive.buffer.bytes = 32768
23:16:53 kafka | [2024-02-25 23:14:53,086] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.8337599Z level=info msg="Executing migration" id="update group index for alert rules"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | reconnect.backoff.max.ms = 1000
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.834237648Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=478.609µs
23:16:53 kafka | [2024-02-25 23:14:53,086] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | > upgrade 0120-toscatrigger.sql
23:16:53 policy-pap | reconnect.backoff.ms = 50
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.838162402Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
23:16:53 kafka | [2024-02-25 23:14:53,086] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | request.timeout.ms = 30000
23:16:53 kafka | [2024-02-25 23:14:53,094] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
23:16:53 policy-pap | retries = 2147483647
23:16:53 kafka | [2024-02-25 23:14:53,095] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | retry.backoff.ms = 100
23:16:53 kafka | [2024-02-25 23:14:53,095] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.83852410Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=361.708µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.client.callback.handler.class = null
23:16:53 kafka | [2024-02-25 23:14:53,095] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.84330880Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.jaas.config = null
23:16:53 kafka | [2024-02-25 23:14:53,095] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
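The broker lines above ("Leader __consumer_offsets-N ... ISR [1] ...") record single-broker leader election for each partition at startup. The same leader/ISR state can be read back with the Kafka AdminClient; a self-contained sketch follows, where only the topic name and bootstrap address come from the log and everything else is an assumption for illustration:

// Illustrative sketch: query partition leader and ISR for the topic seen in the log.
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribePdpPap {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription d = admin.describeTopics(List.of("policy-pdp-pap"))
                                      .allTopicNames().get().get("policy-pdp-pap");
            // Prints the same leader/ISR facts the state.change.logger lines report.
            d.partitions().forEach(p ->
                System.out.printf("partition=%d leader=%s isr=%s%n",
                                  p.partition(), p.leader(), p.isr()));
        }
    }
}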
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.843996154Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=684.354µs
23:16:53 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
23:16:53 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
23:16:53 kafka | [2024-02-25 23:14:53,107] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.848671452Z level=info msg="Executing migration" id="add action column to seed_assignment"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
23:16:53 kafka | [2024-02-25 23:14:53,107] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.858068102Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.39606ms
23:16:53 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
23:16:53 policy-pap | sasl.kerberos.service.name = null
23:16:53 kafka | [2024-02-25 23:14:53,107] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.863627757Z level=info msg="Executing migration" id="add scope column to seed_assignment"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
23:16:53 kafka | [2024-02-25 23:14:53,107] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.872793351Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.164954ms
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
23:16:53 kafka | [2024-02-25 23:14:53,108] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.878417998Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.login.callback.handler.class = null
23:16:53 kafka | [2024-02-25 23:14:53,115] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.879232613Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=814.615µs
23:16:53 policy-db-migrator | > upgrade 0140-toscaparameter.sql
23:16:53 policy-pap | sasl.login.class = null
23:16:53 kafka | [2024-02-25 23:14:53,115] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.885908911Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.login.connect.timeout.ms = null
23:16:53 kafka | [2024-02-25 23:14:53,115] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:20.998639513Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=112.731702ms
23:16:53 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
23:16:53 policy-pap | sasl.login.read.timeout.ms = null
23:16:53 kafka | [2024-02-25 23:14:53,115] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.004100926Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.login.refresh.buffer.seconds = 300
23:16:53 kafka | [2024-02-25 23:14:53,115] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.005318449Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.217263ms
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.login.refresh.min.period.seconds = 60
23:16:53 kafka | [2024-02-25 23:14:53,121] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.009993078Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.login.refresh.window.factor = 0.8
23:16:53 kafka | [2024-02-25 23:14:53,122] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.011256961Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.263753ms
23:16:53 policy-db-migrator | > upgrade 0150-toscaproperty.sql
23:16:53 policy-pap | sasl.login.refresh.window.jitter = 0.05
23:16:53 kafka | [2024-02-25 23:14:53,122] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.017358896Z level=info msg="Executing migration" id="add primary key to seed_assigment"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.login.retry.backoff.max.ms = 10000
23:16:53 kafka | [2024-02-25 23:14:53,122] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.055443927Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=38.081811ms
23:16:53 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
23:16:53 policy-pap | sasl.login.retry.backoff.ms = 100
23:16:53 kafka | [2024-02-25 23:14:53,122] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.061332428Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.mechanism = GSSAPI
23:16:53 kafka | [2024-02-25 23:14:53,130] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.061596443Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=260.615µs
23:16:53 policy-db-migrator | 
23:16:53 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
23:16:53 kafka | [2024-02-25 23:14:53,130] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.066909134Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
23:16:53 policy-db-migrator | --------------
23:16:53 policy-pap | sasl.oauthbearer.expected.audience = null
23:16:53 kafka | [2024-02-25 23:14:53,130] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.067413373Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=506.709µs
23:16:53 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
23:16:53 policy-pap | sasl.oauthbearer.expected.issuer = null
23:16:53 kafka | [2024-02-25 23:14:53,130] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.075903854Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
23:16:53 kafka | [2024-02-25 23:14:53,130] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
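The grafana migrator lines follow one fixed pattern throughout this output: log "Executing migration" with an id, run it, then log "Migration successfully executed" with the elapsed duration. Grafana's migrator is written in Go; the following is not its code, just a minimal Java sketch of the same execute-then-time pattern, with all names invented:

// Sketch of the timed-migration logging pattern visible in the grafana records above.
import java.time.Duration;
import java.time.Instant;

public class MigrationRunner {
    interface Migration {
        String id();
        void apply() throws Exception;
    }

    static void run(Migration m) throws Exception {
        System.out.printf("Executing migration id=%s%n", m.id());
        Instant start = Instant.now();
        m.apply(); // the actual DDL/DML step
        long ms = Duration.between(start, Instant.now()).toMillis();
        System.out.printf("Migration successfully executed id=%s duration=%dms%n", m.id(), ms);
    }
}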
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.076291082Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=387.668µs
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
23:16:53 kafka | [2024-02-25 23:14:53,137] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.080271087Z level=info msg="Executing migration" id="create folder table"
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
23:16:53 kafka | [2024-02-25 23:14:53,138] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.081408998Z level=info msg="Migration successfully executed" id="create folder table" duration=1.138241ms
23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
23:16:53 kafka | [2024-02-25 23:14:53,138] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.086267600Z level=info msg="Executing migration" id="Add index for parent_uid"
23:16:53 policy-pap | sasl.oauthbearer.scope.claim.name = scope
23:16:53 kafka | [2024-02-25 23:14:53,138] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.087452812Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.184622ms
23:16:53 policy-pap | sasl.oauthbearer.sub.claim.name = sub
23:16:53 kafka | [2024-02-25 23:14:53,138] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.093695131Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
23:16:53 policy-pap | sasl.oauthbearer.token.endpoint.url = null
23:16:53 kafka | [2024-02-25 23:14:53,145] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.094900773Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.205252ms
23:16:53 policy-pap | security.protocol = PLAINTEXT
23:16:53 kafka | [2024-02-25 23:14:53,146] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.099232786Z level=info msg="Executing migration" id="Update folder title length"
23:16:53 policy-pap | security.providers = null
23:16:53 kafka | [2024-02-25 23:14:53,146] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.099275796Z level=info msg="Migration successfully executed" id="Update folder title length" duration=44.51µs
23:16:53 policy-pap | send.buffer.bytes = 131072
23:16:53 kafka | [2024-02-25 23:14:53,146] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.1063779Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
23:16:53 policy-pap | socket.connection.setup.timeout.max.ms = 30000
23:16:53 kafka | [2024-02-25 23:14:53,146] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.108255856Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.878026ms
23:16:53 policy-pap | socket.connection.setup.timeout.ms = 10000
23:16:53 kafka | [2024-02-25 23:14:53,153] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.113617238Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
23:16:53 policy-pap | ssl.cipher.suites = null
23:16:53 kafka | [2024-02-25 23:14:53,154] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.115390051Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.772763ms
23:16:53 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
23:16:53 kafka | [2024-02-25 23:14:53,154] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.119250694Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
23:16:53 policy-pap | ssl.endpoint.identification.algorithm = https
23:16:53 kafka | [2024-02-25 23:14:53,154] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.120381595Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.130551ms
23:16:53 policy-pap | ssl.engine.factory.class = null
23:16:53 kafka | [2024-02-25 23:14:53,154] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.128977988Z level=info msg="Executing migration" id="Sync dashboard and folder table"
23:16:53 policy-pap | ssl.key.password = null
23:16:53 kafka | [2024-02-25 23:14:53,164] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.129540099Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=558.641µs
23:16:53 policy-pap | ssl.keymanager.algorithm = SunX509
23:16:53 kafka | [2024-02-25 23:14:53,164] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
23:16:53 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.135491761Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
23:16:53 policy-pap | ssl.keystore.certificate.chain = null
23:16:53 kafka | [2024-02-25 23:14:53,165] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.135807567Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=313.856µs
23:16:53 policy-pap | ssl.keystore.key = null
23:16:53 kafka | [2024-02-25 23:14:53,165] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.139548418Z level=info msg="Executing migration" id="create anon_device table"
23:16:53 policy-pap | ssl.keystore.location = null
23:16:53 kafka | [2024-02-25 23:14:53,165] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(9kyEG5R7S_ymSJoFuQGdeg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.140418904Z level=info msg="Migration successfully executed" id="create anon_device table" duration=870.486µs
23:16:53 policy-pap | ssl.keystore.password = null
23:16:53 kafka | [2024-02-25 23:14:53,173] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.146922088Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
23:16:53 policy-pap | ssl.keystore.type = JKS
23:16:53 kafka | [2024-02-25 23:14:53,174] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.148117960Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.195552ms
23:16:53 policy-pap | ssl.protocol = TLSv1.3
23:16:53 kafka | [2024-02-25 23:14:53,174] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.153673165Z level=info msg="Executing migration" id="add index anon_device.updated_at"
23:16:53 policy-pap | ssl.provider = null
23:16:53 kafka | [2024-02-25 23:14:53,174] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.155412038Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.738873ms
23:16:53 policy-pap | ssl.secure.random.implementation = null
23:16:53 kafka | [2024-02-25 23:14:53,174] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
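The 0160/0170 steps above rewrite the primary keys of jpapolicyaudit and pdpstatistics (DROP PRIMARY KEY, then ADD CONSTRAINT ... PRIMARY KEY (ID)). A hedged sketch of replaying the pdpstatistics pair over JDBC follows; the two statements are taken from the log, while the JDBC URL, credentials, and class name are placeholders invented for illustration:

// Sketch: replay the 0170-pdpstatistics_pk.sql statements via JDBC (placeholders noted above).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PdpStatisticsPkUpgrade {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(
                 "jdbc:mariadb://localhost:3306/policyadmin", "user", "password"); // placeholders
             Statement s = c.createStatement()) {
            s.executeUpdate("ALTER TABLE pdpstatistics DROP PRIMARY KEY");
            s.executeUpdate("ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)");
        }
    }
}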
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.159614987Z level=info msg="Executing migration" id="create signing_key table"
23:16:53 policy-pap | ssl.trustmanager.algorithm = PKIX
23:16:53 kafka | [2024-02-25 23:14:53,180] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.160404323Z level=info msg="Migration successfully executed" id="create signing_key table" duration=788.626µs
23:16:53 policy-pap | ssl.truststore.certificates = null
23:16:53 kafka | [2024-02-25 23:14:53,181] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.164976719Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
23:16:53 policy-pap | ssl.truststore.location = null
23:16:53 kafka | [2024-02-25 23:14:53,181] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.166170262Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.193573ms
23:16:53 policy-pap | ssl.truststore.password = null
23:16:53 kafka | [2024-02-25 23:14:53,181] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.173118663Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
23:16:53 policy-pap | ssl.truststore.type = JKS
23:16:53 kafka | [2024-02-25 23:14:53,181] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.174608191Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.489858ms
23:16:53 policy-pap | transaction.timeout.ms = 60000
23:16:53 kafka | [2024-02-25 23:14:53,190] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.240830664Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
23:16:53 policy-pap | transactional.id = null
23:16:53 kafka | [2024-02-25 23:14:53,191] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
23:16:53 policy-db-migrator | 
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.241524756Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=695.112µs
23:16:53 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
23:16:53 kafka | [2024-02-25 23:14:53,191] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | > upgrade 0100-upgrade.sql
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.250489327Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
23:16:53 policy-pap | 
23:16:53 kafka | [2024-02-25 23:14:53,191] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
23:16:53 policy-db-migrator | --------------
23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.264380100Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=13.892023ms
23:16:53 policy-pap | [2024-02-25T23:14:51.861+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
23:16:53 kafka | [2024-02-25 23:14:53,191] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
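With "Instantiated an idempotent producer" logged for producer-1, a send through it would look roughly like the usage sketch below, continuing the PdpPapPublisher example given earlier. The topic name comes from the log; the class name and the JSON payload are invented for illustration and do not reflect the actual PAP message format:

// Usage sketch: publish one message with the producer sketched earlier (assumptions in the lead-in).
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class PublishOnce {
    public static void main(String[] args) throws Exception {
        try (KafkaProducer<String, String> producer = PdpPapPublisher.create()) {
            RecordMetadata md = producer.send(
                new ProducerRecord<>("policy-pdp-pap", "{\"example\":\"payload\"}")).get();
            System.out.printf("wrote to partition=%d offset=%d%n", md.partition(), md.offset());
        }
    }
}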
(state.change.logger) 23:16:53 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.268799982Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:16:53 policy-pap | [2024-02-25T23:14:51.878+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:53 kafka | [2024-02-25 23:14:53,197] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.269445424Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=646.422µs 23:16:53 policy-pap | [2024-02-25T23:14:51.878+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:53 kafka | [2024-02-25 23:14:53,198] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.272634046Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:16:53 policy-pap | [2024-02-25T23:14:51.878+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902891878 23:16:53 kafka | [2024-02-25 23:14:53,198] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | msg 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.274043182Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.408956ms 23:16:53 policy-pap | [2024-02-25T23:14:51.878+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=00b26071-c70f-48c7-b06a-57ab45326f51, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:53 kafka | [2024-02-25 23:14:53,198] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | upgrade to 1100 completed 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.279866992Z level=info msg="Executing migration" id="create sso_setting table" 23:16:53 policy-pap | [2024-02-25T23:14:51.878+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e8b20985-7a16-4249-8f92-c0d245467f15, alive=false, publisher=null]]: starting 23:16:53 kafka | [2024-02-25 23:14:53,198] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.281297729Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.430157ms 23:16:53 policy-pap | [2024-02-25T23:14:51.879+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:53 kafka | [2024-02-25 23:14:53,209] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.289169398Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 23:16:53 policy-pap | acks = -1 23:16:53 kafka | [2024-02-25 23:14:53,210] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.290114106Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=945.388µs 23:16:53 policy-pap | auto.include.jmx.reporter = true 23:16:53 kafka | [2024-02-25 23:14:53,210] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.295823914Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 23:16:53 policy-pap | batch.size = 16384 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:53,210] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.296214671Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=391.187µs 23:16:53 policy-pap | bootstrap.servers = [kafka:9092] 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:53,210] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 grafana | logger=migrator t=2024-02-25T23:14:21.300090894Z level=info msg="migrations completed" performed=526 skipped=0 duration=5.670914538s 23:16:53 policy-pap | buffer.memory = 33554432 23:16:53 policy-db-migrator | 23:16:53 kafka | [2024-02-25 23:14:53,216] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 grafana | logger=sqlstore t=2024-02-25T23:14:21.312978378Z level=info msg="Created default admin" user=admin 23:16:53 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:53 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:53 kafka | [2024-02-25 23:14:53,217] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 grafana | logger=sqlstore t=2024-02-25T23:14:21.313294464Z level=info msg="Created default organization" 23:16:53 policy-pap | client.id = producer-2 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:53,217] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 23:16:53 grafana | logger=secrets t=2024-02-25T23:14:21.318635165Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:16:53 policy-pap | compression.type = none 23:16:53 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:16:53 kafka | [2024-02-25 23:14:53,217] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 grafana | logger=plugin.store t=2024-02-25T23:14:21.337100015Z level=info msg="Loading plugins..." 23:16:53 policy-pap | connections.max.idle.ms = 540000 23:16:53 policy-db-migrator | -------------- 23:16:53 kafka | [2024-02-25 23:14:53,217] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
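The broker entries above show each __consumer_offsets partition being created as a compacted log: cleanup.policy=compact, segment.bytes=104857600, a single replica with ISR [1], and leadership starting at epoch 0. A minimal sketch of creating a topic with those two properties through Kafka's Java AdminClient follows; the topic name, partition count, and bootstrap address are illustrative, not taken from this job's configuration.

// Sketch only: mirrors the cleanup.policy/segment.bytes the broker logs above.
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public final class CompactedTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            // 50 partitions, replication factor 1, matching the single-broker test setup
            NewTopic topic = new NewTopic("example-compacted", 50, (short) 1)
                    .configs(Map.of(
                            TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT,
                            TopicConfig.SEGMENT_BYTES_CONFIG, "104857600"));
            admin.createTopics(Set.of(topic)).all().get(); // block until the broker acknowledges
        }
    }
}

Compaction keeps only the latest record per key, which is why the offsets topic can act as a durable map of group offsets without unbounded growth.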
23:16:53 policy-pap | delivery.timeout.ms = 120000 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=local.finder t=2024-02-25T23:14:21.378954386Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:16:53 kafka | [2024-02-25 23:14:53,224] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | enable.idempotence = true 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=plugin.store t=2024-02-25T23:14:21.379020628Z level=info msg="Plugins loaded" count=55 duration=41.921913ms 23:16:53 kafka | [2024-02-25 23:14:53,224] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | interceptor.classes = [] 23:16:53 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:53 grafana | logger=query_data t=2024-02-25T23:14:21.389820011Z level=info msg="Query Service initialization" 23:16:53 kafka | [2024-02-25 23:14:53,224] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:16:53 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=live.push_http t=2024-02-25T23:14:21.397047458Z level=info msg="Live Push Gateway initialization" 23:16:53 kafka | [2024-02-25 23:14:53,224] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | linger.ms = 0 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=ngalert.migration t=2024-02-25T23:14:21.402848078Z level=info msg=Starting 23:16:53 kafka | [2024-02-25 23:14:53,224] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 23:16:53 policy-pap | max.block.ms = 60000 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=ngalert.migration orgID=1 t=2024-02-25T23:14:21.40347573Z level=info msg="Migrating alerts for organisation" 23:16:53 kafka | [2024-02-25 23:14:53,231] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | max.in.flight.requests.per.connection = 5 23:16:53 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:53 grafana | logger=ngalert.migration orgID=1 t=2024-02-25T23:14:21.404086612Z level=info msg="Alerts found to migrate" alerts=0 23:16:53 kafka | [2024-02-25 23:14:53,232] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | max.request.size = 1048576 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-25T23:14:21.405514328Z level=info msg="Completed legacy migration" 23:16:53 kafka | [2024-02-25 23:14:53,232] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:16:53 policy-pap | metadata.max.age.ms = 300000 23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:53 grafana | logger=infra.usagestats.collector t=2024-02-25T23:14:21.443401256Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:53 kafka | [2024-02-25 23:14:53,232] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | metadata.max.idle.ms = 300000 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=provisioning.datasources t=2024-02-25T23:14:21.445925352Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:53 kafka | [2024-02-25 23:14:53,232] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-pap | metric.reporters = [] 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=provisioning.alerting t=2024-02-25T23:14:21.461413096Z level=info msg="starting to provision alerting" 23:16:53 kafka | [2024-02-25 23:14:53,239] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | metrics.num.samples = 2 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=provisioning.alerting t=2024-02-25T23:14:21.461438246Z level=info msg="finished to provision alerting" 23:16:53 kafka | [2024-02-25 23:14:53,239] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | metrics.recording.level = INFO 23:16:53 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:16:53 grafana | logger=ngalert.state.manager t=2024-02-25T23:14:21.46164282Z level=info msg="Warming state cache for startup" 23:16:53 kafka | [2024-02-25 23:14:53,239] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:16:53 policy-pap | metrics.sample.window.ms = 30000 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=ngalert.state.manager t=2024-02-25T23:14:21.462309583Z level=info msg="State cache has been initialized" states=0 duration=666.032µs 23:16:53 kafka | [2024-02-25 23:14:53,239] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=ngalert.scheduler t=2024-02-25T23:14:21.462500926Z level=info msg="Starting scheduler" tickInterval=10s 23:16:53 kafka | [2024-02-25 23:14:53,240] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
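The INSERT INTO audit_sequence above seeds a table-backed ID generator: SEQ_GEN starts at IFNULL(max(id),0), so identifiers handed out after the upgrade continue from the highest existing jpapolicyaudit id instead of colliding with it. A sketch of the JPA mapping such a table typically serves; the entity and generator names are illustrative, assuming a jakarta.persistence stack, and are not copied from the policy models.

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.TableGenerator;

@Entity
public class AuditEntrySketch {
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "auditGen")
    @TableGenerator(
            name = "auditGen",
            table = "audit_sequence",       // the SEQ_NAME/SEQ_COUNT table created above
            pkColumnName = "SEQ_NAME",
            valueColumnName = "SEQ_COUNT",
            pkColumnValue = "SEQ_GEN")      // the row the migration just seeded
    private long id;
}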
23:16:53 policy-pap | partitioner.availability.timeout.ms = 0 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=ticker t=2024-02-25T23:14:21.462593298Z level=info msg=starting first_tick=2024-02-25T23:14:30Z 23:16:53 kafka | [2024-02-25 23:14:53,247] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-pap | partitioner.class = null 23:16:53 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:16:53 grafana | logger=grafanaStorageLogger t=2024-02-25T23:14:21.463272331Z level=info msg="Storage starting" 23:16:53 kafka | [2024-02-25 23:14:53,247] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-pap | partitioner.ignore.keys = false 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=ngalert.multiorg.alertmanager t=2024-02-25T23:14:21.463511616Z level=info msg="Starting MultiOrg Alertmanager" 23:16:53 kafka | [2024-02-25 23:14:53,247] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:53 grafana | logger=http.server t=2024-02-25T23:14:21.466120335Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 23:16:53 policy-pap | receive.buffer.bytes = 32768 23:16:53 kafka | [2024-02-25 23:14:53,247] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=grafana-apiserver t=2024-02-25T23:14:21.480813203Z level=info msg="Authentication is disabled" 23:16:53 policy-pap | reconnect.backoff.max.ms = 1000 23:16:53 kafka | [2024-02-25 23:14:53,248] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=grafana-apiserver t=2024-02-25T23:14:21.484508873Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 23:16:53 policy-pap | reconnect.backoff.ms = 50 23:16:53 kafka | [2024-02-25 23:14:53,258] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=sqlstore.transactions t=2024-02-25T23:14:21.582449155Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:53 policy-pap | request.timeout.ms = 30000 23:16:53 kafka | [2024-02-25 23:14:53,260] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:53 grafana | logger=plugins.update.checker t=2024-02-25T23:14:21.608168871Z level=info msg="Update check succeeded" duration=144.635805ms 23:16:53 policy-pap | retries = 2147483647 23:16:53 kafka | [2024-02-25 23:14:53,260] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=sqlstore.transactions t=2024-02-25T23:14:21.683321743Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:53 policy-pap | retry.backoff.ms = 100 23:16:53 kafka | [2024-02-25 23:14:53,260] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | 23:16:53 grafana | logger=sqlstore.transactions t=2024-02-25T23:14:21.695602576Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 23:16:53 policy-pap | sasl.client.callback.handler.class = null 23:16:53 kafka | [2024-02-25 23:14:53,261] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:53 policy-db-migrator | -------------- 23:16:53 grafana | logger=grafana.update.checker t=2024-02-25T23:14:21.888582945Z level=info msg="Update check succeeded" duration=423.923028ms 23:16:53 policy-pap | sasl.jaas.config = null 23:16:53 kafka | [2024-02-25 23:14:53,269] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | TRUNCATE TABLE sequence 23:16:53 grafana | logger=infra.usagestats t=2024-02-25T23:15:12.47581418Z level=info msg="Usage stats are ready to report" 23:16:53 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:53 kafka | [2024-02-25 23:14:53,269] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:53 kafka | [2024-02-25 23:14:53,269] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.kerberos.service.name = null 23:16:53 kafka | [2024-02-25 23:14:53,270] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:53 kafka | [2024-02-25 23:14:53,270] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
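The earlier sqlstore.transactions entries ("Database locked, sleeping then retrying", retry=0, retry=1) are grafana contending with its embedded SQLite store during startup: a concurrent writer makes the transaction fail fast with "database is locked", and the store sleeps and reissues it rather than aborting. A generic sketch of that retry-on-busy pattern; the helper, retry limit, and backoff are illustrative, not grafana's actual code.

import java.util.concurrent.Callable;

public final class BusyRetrySketch {
    // Retries a transaction a bounded number of times when the database reports
    // it is locked, sleeping a little longer before each attempt.
    static <T> T withRetry(Callable<T> tx) throws Exception {
        final int maxRetries = 5;
        for (int attempt = 0; ; attempt++) {
            try {
                return tx.call();
            } catch (Exception e) {
                boolean locked = String.valueOf(e.getMessage()).contains("database is locked");
                if (!locked || attempt >= maxRetries) throw e;
                Thread.sleep(100L * (attempt + 1)); // linear backoff before retrying
            }
        }
    }
}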
23:16:53 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 23:16:53 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:53 kafka | [2024-02-25 23:14:53,279] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.login.callback.handler.class = null 23:16:53 kafka | [2024-02-25 23:14:53,279] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 23:16:53 policy-pap | sasl.login.class = null 23:16:53 kafka | [2024-02-25 23:14:53,279] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.login.connect.timeout.ms = null 23:16:53 kafka | [2024-02-25 23:14:53,279] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.login.read.timeout.ms = null 23:16:53 kafka | [2024-02-25 23:14:53,279] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:53 kafka | [2024-02-25 23:14:53,286] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | DROP TABLE pdpstatistics 23:16:53 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:53 kafka | [2024-02-25 23:14:53,287] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:53 kafka | [2024-02-25 23:14:53,287] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:53 kafka | [2024-02-25 23:14:53,287] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:53 kafka | [2024-02-25 23:14:53,287] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 23:16:53 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:53 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:53 kafka | [2024-02-25 23:14:53,295] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.mechanism = GSSAPI 23:16:53 kafka | [2024-02-25 23:14:53,295] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 23:16:53 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:53 kafka | [2024-02-25 23:14:53,295] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:53 kafka | [2024-02-25 23:14:53,295] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:53 kafka | [2024-02-25 23:14:53,295] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:53 kafka | [2024-02-25 23:14:53,302] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:53 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:53 kafka | [2024-02-25 23:14:53,303] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:53 kafka | [2024-02-25 23:14:53,303] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | DROP TABLE statistics_sequence 23:16:53 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:53 kafka | [2024-02-25 23:14:53,303] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:16:53 policy-db-migrator | -------------- 23:16:53 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:53 kafka | [2024-02-25 23:14:53,303] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:53 policy-db-migrator | 23:16:53 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:53 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:16:53 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:53 policy-db-migrator | name version 23:16:53 policy-pap | security.protocol = PLAINTEXT 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:53 policy-db-migrator | policyadmin 1300 23:16:53 policy-pap | security.providers = null 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:53 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:16:53 policy-pap | send.buffer.bytes = 131072 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:53 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:19 23:16:53 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:53 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:19 23:16:53 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:53 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:19 23:16:53 policy-pap | ssl.cipher.suites = null 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:53 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
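"policyadmin: OK: upgrade (1300)" above is the migrator's summary line, and the numbered rows that follow are its history table: one row per script with ID, script, operation, from_version, to_version, tag, success flag, and timestamp. A sketch of appending such a row over JDBC; only the column headers come from the log, while the table name ("schema_history") and the wiring are illustrative.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.time.Instant;

public final class MigrationHistorySketch {
    // Records one executed script in a history table shaped like the one the
    // migrator prints above (hypothetical table name "schema_history").
    static void record(Connection conn, int id, String script, String tag, boolean success)
            throws Exception {
        String sql = "INSERT INTO schema_history"
                + " (ID, script, operation, from_version, to_version, tag, success, atTime)"
                + " VALUES (?, ?, 'upgrade', '0', '0800', ?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, id);
            ps.setString(2, script);
            ps.setString(3, tag);
            ps.setInt(4, success ? 1 : 0);
            ps.setTimestamp(5, Timestamp.from(Instant.now()));
            ps.executeUpdate();
        }
    }
}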
23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:53 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:53 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 policy-pap | ssl.engine.factory.class = null 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:53 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 policy-pap | ssl.key.password = null 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:53 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:53 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:53 policy-pap | ssl.keystore.certificate.chain = null 23:16:53 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:53 policy-pap | ssl.keystore.key = null 23:16:53 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:53 policy-pap | ssl.keystore.location = null 23:16:53 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:53 policy-pap | ssl.keystore.password = null 23:16:53 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:53 policy-pap | ssl.keystore.type = JKS 23:16:53 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25
23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:53 policy-pap | ssl.protocol = TLSv1.3 23:16:53 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:53 policy-pap | ssl.provider = null 23:16:53 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:53 policy-pap | ssl.secure.random.implementation = null 23:16:53 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:53 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:53 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:53 policy-pap | ssl.truststore.certificates = null 23:16:53 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:53 policy-pap | ssl.truststore.location = null 23:16:53 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:53 policy-pap | ssl.truststore.password = null 23:16:53 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:53 policy-pap | ssl.truststore.type = JKS 23:16:53 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:53 policy-pap | transaction.timeout.ms = 60000 23:16:53 policy-db-migrator | 22 
0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:53 policy-pap | transactional.id = null 23:16:53 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:53 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:53 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:53 policy-pap | 23:16:53 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.879+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 23:16:53 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:53 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:53 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902891883 23:16:53 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
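The ProducerConfig dump interleaved above is the second of policy-pap's two Kafka sinks coming up with idempotence enabled: enable.idempotence = true, acks = -1, retries = 2147483647, max.in.flight.requests.per.connection = 5, and StringSerializer for both keys and values, which is why the client logs "Instantiated an idempotent producer". A minimal sketch of building a producer with those settings; the topic is illustrative, while the bootstrap address matches the logged bootstrap.servers = [kafka:9092].

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public final class IdempotentProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // triggers the "idempotent producer" log line
        props.put(ProducerConfig.ACKS_CONFIG, "all");              // the dump shows acks = -1, i.e. all in-sync replicas
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "value"));
        }
    }
}

On the first send the broker assigns the client a producer id, which is what the later "ProducerId set to 0 with epoch 0" lines record.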
23:16:53 policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e8b20985-7a16-4249-8f92-c0d245467f15, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:53 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:16:53 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:16:53 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.887+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:16:53 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.887+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:16:53 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.889+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:16:53 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.889+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:16:53 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.889+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:16:53 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800
2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.889+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:16:53 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.890+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:16:53 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.891+00:00|INFO|ServiceManager|main] Policy PAP started 23:16:53 kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.892+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.355 seconds (process running for 12.105) 23:16:53 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,315] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:51.894+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:16:53 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,315] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:52.363+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:53 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 kafka | [2024-02-25 23:14:53,315] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:14:52.364+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ 23:16:53 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 policy-pap | 
[2024-02-25T23:14:52.365+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ 23:16:53 kafka | [2024-02-25 23:14:53,315] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:53 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21 23:16:53 policy-pap | [2024-02-25T23:14:52.365+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ 23:16:53 kafka | [2024-02-25 23:14:53,315] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:53 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 23:16:53 policy-pap | [2024-02-25T23:14:52.406+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:53 kafka | [2024-02-25 23:14:53,325] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:53 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 23:16:53 policy-pap | [2024-02-25T23:14:52.406+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ 23:16:53 kafka | [2024-02-25 23:14:53,326] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 23:16:53 policy-pap | [2024-02-25T23:14:52.468+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:53 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 23:16:53 policy-pap | [2024-02-25T23:14:52.481+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 23:16:53 policy-pap | [2024-02-25T23:14:52.483+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for 
partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 policy-pap | [2024-02-25T23:14:52.557+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 policy-pap | [2024-02-25T23:14:52.617+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 policy-pap | [2024-02-25T23:14:52.664+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:52.724+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:52.770+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:52.831+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:52.881+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:52.937+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:52.990+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:53.045+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:53.096+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:53.151+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 policy-pap | [2024-02-25T23:14:53.203+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:53.262+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:53.308+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
23:16:53 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:53.379+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:53 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:53.389+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] (Re-)joining group
23:16:53 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:53.417+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
23:16:53 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
23:16:53 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:53.418+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Request joining group due to: need to re-join with the given member-id: consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284
23:16:53 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:53.419+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:53 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:53.419+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] (Re-)joining group
23:16:53 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:53.420+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
23:16:53 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:53.433+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a
23:16:53 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:53.433+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
23:16:53 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:53.433+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
23:16:53 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:56.455+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Successfully joined group with generation Generation{generationId=1, memberId='consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284', protocol='range'}
23:16:53 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:56.459+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a', protocol='range'}
23:16:53 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:56.466+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a=Assignment(partitions=[policy-pdp-pap-0])}
23:16:53 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:56.466+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Finished assignment for group at generation 1: {consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284=Assignment(partitions=[policy-pdp-pap-0])}
23:16:53 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:56.500+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a', protocol='range'}
23:16:53 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:56.500+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Successfully synced group in generation Generation{generationId=1, memberId='consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284', protocol='range'}
23:16:53 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 policy-pap | [2024-02-25T23:14:56.501+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 policy-pap | [2024-02-25T23:14:56.501+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 policy-pap | [2024-02-25T23:14:56.506+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Adding newly assigned partitions: policy-pdp-pap-0
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 policy-pap | [2024-02-25T23:14:56.506+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 policy-pap | [2024-02-25T23:14:56.527+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 policy-pap | [2024-02-25T23:14:56.527+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Found no committed offset for partition policy-pdp-pap-0
23:16:53 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:14:56.545+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:53 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:14:56.545+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
23:16:53 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:01.611+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet'
23:16:53 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:01.611+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet'
23:16:53 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:01.613+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 2 ms
23:16:53 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.756+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
23:16:53 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | []
23:16:53 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.757+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:53 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2f1dcd45-4683-45cf-9d92-dddeb169e9b3","timestampMs":1708902913716,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
23:16:53 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.757+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2f1dcd45-4683-45cf-9d92-dddeb169e9b3","timestampMs":1708902913716,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
23:16:53 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.769+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
23:16:53 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.844+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting
23:16:53 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.844+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting listener
23:16:53 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.845+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting timer
23:16:53 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.845+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=8017ad77-05f8-444a-aa06-a451f278f050, expireMs=1708902943845]
23:16:53 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.847+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=8017ad77-05f8-444a-aa06-a451f278f050, expireMs=1708902943845]
23:16:53 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.847+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting enqueue
23:16:53 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.848+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate started
23:16:53 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.848+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"8017ad77-05f8-444a-aa06-a451f278f050","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.889+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"8017ad77-05f8-444a-aa06-a451f278f050","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.890+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
23:16:53 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.891+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:53 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"8017ad77-05f8-444a-aa06-a451f278f050","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.891+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
23:16:53 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.910+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4057075b-fac2-492c-a7f4-7d5372a2ee8d","timestampMs":1708902913898,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
23:16:53 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.912+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
23:16:53 policy-pap | [2024-02-25T23:15:13.918+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
23:16:53 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8017ad77-05f8-444a-aa06-a451f278f050","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d02f35-10cd-4f51-b9ca-9c8af9b90048","timestampMs":1708902913900,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
23:16:53 policy-pap | [2024-02-25T23:15:13.919+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2502242314191100u 1 2024-02-25 23:14:25
23:16:53 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4057075b-fac2-492c-a7f4-7d5372a2ee8d","timestampMs":1708902913898,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2502242314191200u 1 2024-02-25 23:14:25
23:16:53 policy-pap | [2024-02-25T23:15:13.919+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2502242314191200u 1 2024-02-25 23:14:25
23:16:53 policy-pap | [2024-02-25T23:15:13.920+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping enqueue
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2502242314191200u 1 2024-02-25 23:14:25
23:16:53 policy-pap | [2024-02-25T23:15:13.920+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping timer
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2502242314191200u 1 2024-02-25 23:14:25
23:16:53 policy-pap | [2024-02-25T23:15:13.920+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8017ad77-05f8-444a-aa06-a451f278f050, expireMs=1708902943845]
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2502242314191300u 1 2024-02-25 23:14:26
23:16:53 policy-pap | [2024-02-25T23:15:13.920+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping listener
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2502242314191300u 1 2024-02-25 23:14:26
23:16:53 policy-pap | [2024-02-25T23:15:13.921+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopped
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2502242314191300u 1 2024-02-25 23:14:26
23:16:53 policy-pap | [2024-02-25T23:15:13.927+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate successful
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-db-migrator | policyadmin: OK @ 1300
23:16:53 policy-pap | [2024-02-25T23:15:13.927+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c start publishing next request
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.927+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange starting
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.927+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange starting listener
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.927+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange starting timer
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.928+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=cfeeaf9a-8c54-4457-9343-75107d5ce4da, expireMs=1708902943928]
23:16:53 kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.928+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange starting enqueue
23:16:53 kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.928+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange started
23:16:53 kafka | [2024-02-25 23:14:53,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.928+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=cfeeaf9a-8c54-4457-9343-75107d5ce4da, expireMs=1708902943928]
23:16:53 kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.929+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:53 kafka | [2024-02-25 23:14:53,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.963+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 kafka | [2024-02-25 23:14:53,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.963+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
23:16:53 kafka | [2024-02-25 23:14:53,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.967+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"00c40a54-df00-48d3-a9d7-3e82bceb0900","timestampMs":1708902913940,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 kafka | [2024-02-25 23:14:53,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
23:16:53 policy-pap | [2024-02-25T23:15:13.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange stopping
23:16:53 kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange stopping enqueue
23:16:53 kafka | [2024-02-25 23:14:53,334] INFO [Broker id=1] Finished LeaderAndIsr request in 673ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
23:16:53 policy-pap | [2024-02-25T23:15:13.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange stopping timer
23:16:53 kafka | [2024-02-25 23:14:53,338] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 10 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.986+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=cfeeaf9a-8c54-4457-9343-75107d5ce4da, expireMs=1708902943928]
23:16:53 kafka | [2024-02-25 23:14:53,339] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange stopping listener
23:16:53 kafka | [2024-02-25 23:14:53,339] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange stopped
23:16:53 kafka | [2024-02-25 23:14:53,339] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.987+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange successful
23:16:53 kafka | [2024-02-25 23:14:53,339] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.987+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c start publishing next request
23:16:53 kafka | [2024-02-25 23:14:53,339] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting
23:16:53 kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.988+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting listener
23:16:53 kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.988+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting timer
23:16:53 kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.988+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=a77fa683-80f4-4771-a123-a237db6bdd66, expireMs=1708902943988]
23:16:53 kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.988+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting enqueue
23:16:53 kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.989+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate started
23:16:53 policy-pap | [2024-02-25T23:15:13.989+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
23:16:53 policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a77fa683-80f4-4771-a123-a237db6bdd66","timestampMs":1708902913954,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 kafka | [2024-02-25 23:14:53,340] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=19qiw_gSQSuGAZ9hqdP69g, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=9kyEG5R7S_ymSJoFuQGdeg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
23:16:53 policy-pap | [2024-02-25T23:15:13.992+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:53 kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8017ad77-05f8-444a-aa06-a451f278f050","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d02f35-10cd-4f51-b9ca-9c8af9b90048","timestampMs":1708902913900,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.992+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8017ad77-05f8-444a-aa06-a451f278f050
23:16:53 policy-pap | [2024-02-25T23:15:13.995+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:53 kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.996+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
23:16:53 kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.996+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:53 kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"00c40a54-df00-48d3-a9d7-3e82bceb0900","timestampMs":1708902913940,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:13.996+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id cfeeaf9a-8c54-4457-9343-75107d5ce4da
23:16:53 kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:14.001+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a77fa683-80f4-4771-a123-a237db6bdd66","timestampMs":1708902913954,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:14.001+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
23:16:53 kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:14.003+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:53 kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a77fa683-80f4-4771-a123-a237db6bdd66","timestampMs":1708902913954,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:14.004+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
23:16:53 kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:14.008+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
23:16:53 kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a77fa683-80f4-4771-a123-a237db6bdd66","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"55953c4d-82bc-4c85-8ce1-d8e5f2afa2ca","timestampMs":1708902914002,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:14.009+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a77fa683-80f4-4771-a123-a237db6bdd66
23:16:53 kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:14.011+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
23:16:53 kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a77fa683-80f4-4771-a123-a237db6bdd66","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"55953c4d-82bc-4c85-8ce1-d8e5f2afa2ca","timestampMs":1708902914002,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
23:16:53 policy-pap | [2024-02-25T23:15:14.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping
23:16:53 kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:14.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping enqueue
23:16:53 kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
23:16:53 policy-pap | [2024-02-25T23:15:14.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping timer
23:16:53 kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:14.012+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=a77fa683-80f4-4771-a123-a237db6bdd66, expireMs=1708902943988] 23:16:53 kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:14.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping listener 23:16:53 kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:14.013+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopped 23:16:53 kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:14.019+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate successful 23:16:53 kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:14.019+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c has no more requests 23:16:53 kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:22.311+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:53 kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:22.318+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:53 kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:22.762+00:00|INFO|SessionData|http-nio-6969-exec-8] unknown group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:23.380+00:00|INFO|SessionData|http-nio-6969-exec-8] create cached group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:23.380+00:00|INFO|SessionData|http-nio-6969-exec-8] creating DB group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:23.937+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:24.219+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:53 kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:24.328+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:53 kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:24.328+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:24.329+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:24.344+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-25T23:15:24Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-25T23:15:24Z, user=policyadmin)] 23:16:53 kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:53 policy-pap | [2024-02-25T23:15:25.068+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:25.069+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-3] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:25.069+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:25.069+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:25.070+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 
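The kafka lines interleaved above show two pieces of broker housekeeping: the GroupMetadataManager loading offsets and group metadata out of each __consumer_offsets partition, and the broker caching leader info for every partition carried by the controller's UPDATE_METADATA request. Once that completes, the coordinator can serve the consumer groups that join later in this log. As a minimal sketch (assuming a shell inside the kafka container and the broker address kafka:9092 taken from the log), the stock Kafka CLI can inspect the resulting group state:

    # List the consumer groups the coordinator knows about, then describe
    # the policy-pap group (members, partition assignments, lag).
    kafka-consumer-groups.sh --bootstrap-server kafka:9092 --list
    kafka-consumer-groups.sh --bootstrap-server kafka:9092 \
        --describe --group policy-pap
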
policy-pap | [2024-02-25T23:15:25.085+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-25T23:15:25Z, user=policyadmin)] 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:25.473+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:25.473+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 23:16:53 policy-pap | [2024-02-25T23:15:25.474+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:25.474+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:25.474+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:25.474+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with 
correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:25.485+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-25T23:15:25Z, user=policyadmin)] 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:43.845+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=8017ad77-05f8-444a-aa06-a451f278f050, expireMs=1708902943845] 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:43.929+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=cfeeaf9a-8c54-4457-9343-75107d5ce4da, expireMs=1708902943928] 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:46.100+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:15:46.102+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 policy-pap | [2024-02-25T23:16:51.891+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 
360000ms 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition 
__consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', 
partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | 
[2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,350] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,350] TRACE [Controller id=1 
epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:53 kafka | [2024-02-25 23:14:53,412] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group bd340acf-32e5-46ed-9341-bc882164db21 in Empty state. Created a new member id consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:53 kafka | [2024-02-25 23:14:53,424] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:53 kafka | [2024-02-25 23:14:53,431] INFO [GroupCoordinator 1]: Preparing to rebalance group bd340acf-32e5-46ed-9341-bc882164db21 in state PreparingRebalance with old generation 0 (__consumer_offsets-9) (reason: Adding new member consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:53 kafka | [2024-02-25 23:14:53,435] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:53 kafka | [2024-02-25 23:14:54,107] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group b53cde7a-481f-427a-882b-d5bcee52ac2a in Empty state. Created a new member id consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:53 kafka | [2024-02-25 23:14:54,111] INFO [GroupCoordinator 1]: Preparing to rebalance group b53cde7a-481f-427a-882b-d5bcee52ac2a in state PreparingRebalance with old generation 0 (__consumer_offsets-47) (reason: Adding new member consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:53 kafka | [2024-02-25 23:14:56,449] INFO [GroupCoordinator 1]: Stabilized group bd340acf-32e5-46ed-9341-bc882164db21 generation 1 (__consumer_offsets-9) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:53 kafka | [2024-02-25 23:14:56,457] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:53 kafka | [2024-02-25 23:14:56,477] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:53 kafka | [2024-02-25 23:14:56,477] INFO [GroupCoordinator 1]: Assignment received from leader consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284 for group bd340acf-32e5-46ed-9341-bc882164db21 for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:53 kafka | [2024-02-25 23:14:57,115] INFO [GroupCoordinator 1]: Stabilized group b53cde7a-481f-427a-882b-d5bcee52ac2a generation 1 (__consumer_offsets-47) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:53 kafka | [2024-02-25 23:14:57,135] INFO [GroupCoordinator 1]: Assignment received from leader consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7 for group b53cde7a-481f-427a-882b-d5bcee52ac2a for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:53 ++ echo 'Tearing down containers...' 23:16:53 Tearing down containers... 23:16:53 ++ docker-compose down -v --remove-orphans 23:16:53 Stopping policy-apex-pdp ... 23:16:53 Stopping policy-pap ... 23:16:53 Stopping kafka ... 23:16:53 Stopping policy-api ... 23:16:53 Stopping grafana ... 23:16:53 Stopping simulator ... 23:16:53 Stopping compose_zookeeper_1 ... 23:16:53 Stopping mariadb ... 23:16:53 Stopping prometheus ... 23:16:54 Stopping grafana ... done 23:16:54 Stopping prometheus ... done 23:17:04 Stopping policy-apex-pdp ... done 23:17:14 Stopping simulator ... done 23:17:14 Stopping policy-pap ... done 23:17:15 Stopping mariadb ... done 23:17:15 Stopping kafka ... done 23:17:16 Stopping compose_zookeeper_1 ... done 23:17:25 Stopping policy-api ... done 23:17:25 Removing policy-apex-pdp ... 23:17:25 Removing policy-pap ... 23:17:25 Removing kafka ... 23:17:25 Removing policy-api ... 23:17:25 Removing policy-db-migrator ... 23:17:25 Removing grafana ... 23:17:25 Removing simulator ... 23:17:25 Removing compose_zookeeper_1 ... 23:17:25 Removing mariadb ... 23:17:25 Removing prometheus ... 23:17:25 Removing policy-api ... done 23:17:25 Removing simulator ... done 23:17:25 Removing policy-pap ... done 23:17:25 Removing policy-apex-pdp ... done 23:17:25 Removing policy-db-migrator ... done 23:17:25 Removing compose_zookeeper_1 ... done 23:17:25 Removing grafana ... done 23:17:25 Removing kafka ... done 23:17:25 Removing prometheus ... done 23:17:25 Removing mariadb ... 
done 23:17:25 Removing network compose_default 23:17:25 ++ cd /w/workspace/policy-pap-master-project-csit-pap 23:17:25 + load_set 23:17:25 + _setopts=hxB 23:17:25 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:17:25 ++ tr : ' ' 23:17:25 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:25 + set +o braceexpand 23:17:25 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:25 + set +o hashall 23:17:25 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:25 + set +o interactive-comments 23:17:25 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:25 + set +o xtrace 23:17:25 ++ echo hxB 23:17:25 ++ sed 's/./& /g' 23:17:25 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:25 + set +h 23:17:25 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:25 + set +x 23:17:25 + [[ -n /tmp/tmp.Nh0lglCdc7 ]] 23:17:25 + rsync -av /tmp/tmp.Nh0lglCdc7/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:17:25 sending incremental file list 23:17:25 ./ 23:17:25 log.html 23:17:25 output.xml 23:17:25 report.html 23:17:25 testplan.txt 23:17:25 23:17:25 sent 918,975 bytes received 95 bytes 1,838,140.00 bytes/sec 23:17:25 total size is 918,429 speedup is 1.00 23:17:25 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:17:25 + exit 0 23:17:25 $ ssh-agent -k 23:17:25 unset SSH_AUTH_SOCK; 23:17:25 unset SSH_AGENT_PID; 23:17:25 echo Agent pid 2142 killed; 23:17:25 [ssh-agent] Stopped. 23:17:25 Robot results publisher started... 23:17:25 INFO: Checking test criticality is deprecated and will be dropped in a future release! 23:17:25 -Parsing output xml: 23:17:26 Done! 23:17:26 WARNING! Could not find file: **/log.html 23:17:26 WARNING! Could not find file: **/report.html 23:17:26 -Copying log files to build dir: 23:17:26 Done! 23:17:26 -Assigning results to build: 23:17:26 Done! 23:17:26 -Checking thresholds: 23:17:26 Done! 23:17:26 Done publishing Robot results. 23:17:26 [PostBuildScript] - [INFO] Executing post build scripts. 
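The `+ load_set` portion of the trace above is the option juggling these CI scripts use around verbose sections: every long option listed in $SHELLOPTS and every single-letter flag recorded in $_setopts (here hxB, i.e. hashall, xtrace, braceexpand) is walked one by one and switched off with `set +o` / `set +`, so the xtrace noise stops before the archiving steps. A self-contained sketch of the same idiom (function name and hard-coded flags assumed from this run):

    #!/bin/bash
    # Switch off every long option currently in $SHELLOPTS, then every
    # single-letter flag recorded earlier (this run saved "hxB").
    load_set() {
        local _setopts=hxB
        for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
            set +o "$i"     # e.g. set +o xtrace
        done
        for i in $(echo "${_setopts}" | sed 's/./& /g'); do
            set +"$i"       # e.g. set +x
        done
    }
    load_set
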
23:17:26 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3777363861329545104.sh 23:17:26 ---> sysstat.sh 23:17:26 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1488974624590515388.sh 23:17:26 ---> package-listing.sh 23:17:26 ++ facter osfamily 23:17:26 ++ tr '[:upper:]' '[:lower:]' 23:17:26 + OS_FAMILY=debian 23:17:26 + workspace=/w/workspace/policy-pap-master-project-csit-pap 23:17:26 + START_PACKAGES=/tmp/packages_start.txt 23:17:26 + END_PACKAGES=/tmp/packages_end.txt 23:17:26 + DIFF_PACKAGES=/tmp/packages_diff.txt 23:17:26 + PACKAGES=/tmp/packages_start.txt 23:17:26 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:26 + PACKAGES=/tmp/packages_end.txt 23:17:26 + case "${OS_FAMILY}" in 23:17:26 + dpkg -l 23:17:26 + grep '^ii' 23:17:26 + '[' -f /tmp/packages_start.txt ']' 23:17:26 + '[' -f /tmp/packages_end.txt ']' 23:17:26 + diff /tmp/packages_start.txt /tmp/packages_end.txt 23:17:26 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:26 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:26 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:27 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1001023641922184474.sh 23:17:27 ---> capture-instance-metadata.sh 23:17:27 Setup pyenv: 23:17:27 system 23:17:27 3.8.13 23:17:27 3.9.13 23:17:27 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:27 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NbUn from file:/tmp/.os_lf_venv 23:17:28 lf-activate-venv(): INFO: Installing: lftools 23:17:40 lf-activate-venv(): INFO: Adding /tmp/venv-NbUn/bin to PATH 23:17:40 INFO: Running in OpenStack, capturing instance metadata 23:17:40 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2194008244325888421.sh 23:17:40 provisioning config files... 23:17:40 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config10248940132880237207tmp 23:17:40 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 23:17:40 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 23:17:40 [EnvInject] - Injecting environment variables from a build step. 23:17:40 [EnvInject] - Injecting as environment variables the properties content 23:17:40 SERVER_ID=logs 23:17:40 23:17:40 [EnvInject] - Variables injected successfully. 23:17:40 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11184646282004448228.sh 23:17:40 ---> create-netrc.sh 23:17:40 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13091437881155059762.sh 23:17:40 ---> python-tools-install.sh 23:17:40 Setup pyenv: 23:17:40 system 23:17:40 3.8.13 23:17:40 3.9.13 23:17:40 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:40 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NbUn from file:/tmp/.os_lf_venv 23:17:42 lf-activate-venv(): INFO: Installing: lftools 23:17:50 lf-activate-venv(): INFO: Adding /tmp/venv-NbUn/bin to PATH 23:17:50 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9896728310841662378.sh 23:17:50 ---> sudo-logs.sh 23:17:50 Archiving 'sudo' log.. 
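package-listing.sh above implements a simple drift check: the list of installed Debian packages is snapshotted with dpkg at the start and end of the job, the two snapshots are diffed, and all three files are archived with the build. A condensed sketch of the technique, reusing the paths shown in the trace:

    #!/bin/bash
    # Snapshot installed packages, diff against the start-of-job snapshot,
    # and keep the evidence in the workspace archive (paths from the trace).
    START=/tmp/packages_start.txt
    END=/tmp/packages_end.txt
    DIFF=/tmp/packages_diff.txt
    WORKSPACE=/w/workspace/policy-pap-master-project-csit-pap

    dpkg -l | grep '^ii' > "$END"
    if [ -f "$START" ] && [ -f "$END" ]; then
        diff "$START" "$END" > "$DIFF" || true   # diff exits 1 when the lists differ
    fi
    mkdir -p "${WORKSPACE}/archives/"
    cp -f "$DIFF" "$END" "$START" "${WORKSPACE}/archives/"
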
23:17:51 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6869981220830927508.sh
23:17:51 ---> job-cost.sh
23:17:51 Setup pyenv:
23:17:51 system
23:17:51 3.8.13
23:17:51 3.9.13
23:17:51 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:51 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NbUn from file:/tmp/.os_lf_venv
23:17:52 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:17:58 lf-activate-venv(): INFO: Adding /tmp/venv-NbUn/bin to PATH
23:17:58 INFO: No Stack...
23:17:59 INFO: Retrieving Pricing Info for: v3-standard-8
23:17:59 INFO: Archiving Costs
23:17:59 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins13638010387743333355.sh
23:17:59 ---> logs-deploy.sh
23:17:59 Setup pyenv:
23:17:59 system
23:17:59 3.8.13
23:17:59 3.9.13
23:17:59 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:59 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NbUn from file:/tmp/.os_lf_venv
23:18:00 lf-activate-venv(): INFO: Installing: lftools
23:18:09 lf-activate-venv(): INFO: Adding /tmp/venv-NbUn/bin to PATH
23:18:09 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1591
23:18:09 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:18:10 Archives upload complete.
23:18:10 INFO: archiving logs to Nexus
23:18:11 ---> uname -a:
23:18:11 Linux prd-ubuntu1804-docker-8c-8g-8694 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:18:11
23:18:11
23:18:11 ---> lscpu:
23:18:11 Architecture: x86_64
23:18:11 CPU op-mode(s): 32-bit, 64-bit
23:18:11 Byte Order: Little Endian
23:18:11 CPU(s): 8
23:18:11 On-line CPU(s) list: 0-7
23:18:11 Thread(s) per core: 1
23:18:11 Core(s) per socket: 1
23:18:11 Socket(s): 8
23:18:11 NUMA node(s): 1
23:18:11 Vendor ID: AuthenticAMD
23:18:11 CPU family: 23
23:18:11 Model: 49
23:18:11 Model name: AMD EPYC-Rome Processor
23:18:11 Stepping: 0
23:18:11 CPU MHz: 2799.998
23:18:11 BogoMIPS: 5599.99
23:18:11 Virtualization: AMD-V
23:18:11 Hypervisor vendor: KVM
23:18:11 Virtualization type: full
23:18:11 L1d cache: 32K
23:18:11 L1i cache: 32K
23:18:11 L2 cache: 512K
23:18:11 L3 cache: 16384K
23:18:11 NUMA node0 CPU(s): 0-7
23:18:11 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:18:11
23:18:11
23:18:11 ---> nproc:
23:18:11 8
23:18:11
23:18:11
23:18:11 ---> df -h:
23:18:11 Filesystem Size Used Avail Use% Mounted on
23:18:11 udev 16G 0 16G 0% /dev
23:18:11 tmpfs 3.2G 708K 3.2G 1% /run
23:18:11 /dev/vda1 155G 14G 142G 9% /
23:18:11 tmpfs 16G 0 16G 0% /dev/shm
23:18:11 tmpfs 5.0M 0 5.0M 0% /run/lock
23:18:11 tmpfs 16G 0 16G 0% /sys/fs/cgroup
23:18:11 /dev/vda15 105M 4.4M 100M 5% /boot/efi
23:18:11 tmpfs 3.2G 0 3.2G 0% /run/user/1001
23:18:11
23:18:11
23:18:11 ---> free -m:
23:18:11 total used free shared buff/cache available
23:18:11 Mem: 32167 859 25101 0 6206 30852
23:18:11 Swap: 1023 0 1023
23:18:11
23:18:11
23:18:11 ---> ip addr:
23:18:11 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:18:11 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:18:11 inet 127.0.0.1/8 scope host lo
23:18:11 valid_lft forever preferred_lft forever
23:18:11 inet6 ::1/128 scope host
23:18:11 valid_lft forever preferred_lft forever
23:18:11 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:18:11 link/ether fa:16:3e:a4:a0:2f brd ff:ff:ff:ff:ff:ff
23:18:11 inet 10.30.107.118/23 brd 10.30.107.255 scope global dynamic ens3
23:18:11 valid_lft 85930sec preferred_lft 85930sec
23:18:11 inet6 fe80::f816:3eff:fea4:a02f/64 scope link
23:18:11 valid_lft forever preferred_lft forever
23:18:11 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:18:11 link/ether 02:42:b4:d7:e4:b6 brd ff:ff:ff:ff:ff:ff
23:18:11 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:18:11 valid_lft forever preferred_lft forever
23:18:11
23:18:11
23:18:11 ---> sar -b -r -n DEV:
23:18:11 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-8694) 02/25/24 _x86_64_ (8 CPU)
23:18:11
23:18:11 23:10:24 LINUX RESTART (8 CPU)
23:18:11
23:18:11 23:11:01 tps rtps wtps bread/s bwrtn/s
23:18:11 23:12:01 114.85 36.13 78.72 1687.72 26761.41
23:18:11 23:13:01 126.40 23.20 103.20 2793.40 31793.10
23:18:11 23:14:01 212.07 0.17 211.90 15.42 122584.68
23:18:11 23:15:01 339.96 12.31 327.65 794.60 55741.94
23:18:11 23:16:01 19.43 0.00 19.43 0.00 19913.01
23:18:11 23:17:01 24.55 0.07 24.48 8.53 21138.71
23:18:11 23:18:01 68.18 1.95 66.23 112.10 21792.34
23:18:11 Average: 129.38 10.54 118.83 772.81 42847.75
23:18:11
23:18:11 23:11:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:18:11 23:12:01 30082168 31671396 2857052 8.67 69544 1831112 1456968 4.29 900916 1667092 155556
23:18:11 23:13:01 28917568 31664232 4021652 12.21 98960 2922284 1570140 4.62 991272 2662396 908888
23:18:11 23:14:01 25795500 31669464 7143720 21.69 140100 5861668 1457548 4.29 1018912 5599124 807752
23:18:11 23:15:01 23327516 29367544 9611704 29.18 156564 5992252 9091868 26.75 3499656 5506116 1660
23:18:11 23:16:01 23299336 29340064 9639884 29.27 156760 5992536 9101264 26.78 3529692 5503508 296
23:18:11 23:17:01 23332848 29400548 9606372 29.16 157124 6020724 8311384 24.45 3486740 5517740 396
23:18:11 23:18:01 25729972 31618020 7209248 21.89 160444 5853436 1615888 4.75 1300040 5365320 54952
23:18:11 Average: 25783558 30675895 7155662 21.72 134214 4924859 4657866 13.70 2103890 4545899 275643
23:18:11
23:18:11 23:11:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:18:11 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:11 23:12:01 ens3 73.70 53.14 943.46 21.20 0.00 0.00 0.00 0.00
23:18:11 23:12:01 lo 1.60 1.60 0.17 0.17 0.00 0.00 0.00 0.00
23:18:11 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:11 23:13:01 br-312cfb88b3b8 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:11 23:13:01 ens3 194.07 137.13 5338.11 14.42 0.00 0.00 0.00 0.00
23:18:11 23:13:01 lo 7.00 7.00 0.65 0.65 0.00 0.00 0.00 0.00
23:18:11 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:11 23:14:01 br-312cfb88b3b8 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:11 23:14:01 ens3 1010.10 560.31 26797.61 42.41 0.00 0.00 0.00 0.00
23:18:11 23:14:01 lo 6.25 6.25 0.63 0.63 0.00 0.00 0.00 0.00
23:18:11 23:15:01 veth476d79e 0.55 0.83 0.06 0.31 0.00 0.00 0.00 0.00
23:18:11 23:15:01 vetha208a11 1.70 1.90 0.34 0.18 0.00 0.00 0.00 0.00
23:18:11 23:15:01 veth0739b17 5.03 6.43 0.81 0.92 0.00 0.00 0.00 0.00
23:18:11 23:15:01 veth79bdb89 0.00 0.38 0.00 0.02 0.00 0.00 0.00 0.00
23:18:11 23:16:01 veth476d79e 0.25 0.20 0.02 0.01 0.00 0.00 0.00 0.00
23:18:11 23:16:01 vetha208a11 3.82 5.35 0.79 0.48 0.00 0.00 0.00 0.00
23:18:11 23:16:01 veth0739b17 0.17 0.35 0.01 0.02 0.00 0.00 0.00 0.00
23:18:11 23:16:01 veth79bdb89 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00
23:18:11 23:17:01 vetha208a11 3.12 4.58 0.47 0.35 0.00 0.00 0.00 0.00
23:18:11 23:17:01 veth0739b17 0.17 0.37 0.01 0.03 0.00 0.00 0.00 0.00
23:18:11 23:17:01 veth79bdb89 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00
23:18:11 23:17:01 veth690e18f 53.97 47.94 21.02 40.48 0.00 0.00 0.00 0.00
23:18:11 23:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:11 23:18:01 ens3 1629.82 1001.52 33994.85 174.98 0.00 0.00 0.00 0.00
23:18:11 23:18:01 lo 35.00 35.00 6.20 6.20 0.00 0.00 0.00 0.00
23:18:11 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:11 Average: ens3 195.52 116.55 4750.53 17.70 0.00 0.00 0.00 0.00
23:18:11 Average: lo 4.44 4.44 0.84 0.84 0.00 0.00 0.00 0.00
23:18:11
23:18:11
23:18:11 ---> sar -P ALL:
23:18:11 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-8694) 02/25/24 _x86_64_ (8 CPU)
23:18:11
23:18:11 23:10:24 LINUX RESTART (8 CPU)
23:18:11
23:18:11 23:11:01 CPU %user %nice %system %iowait %steal %idle
23:18:11 23:12:01 all 9.75 0.00 0.79 2.51 0.03 86.92
23:18:11 23:12:01 0 0.83 0.00 0.28 0.07 0.02 98.80
23:18:11 23:12:01 1 1.46 0.00 0.27 0.70 0.02 97.56
23:18:11 23:12:01 2 0.83 0.00 0.47 0.22 0.00 98.48
23:18:11 23:12:01 3 13.77 0.00 0.73 1.17 0.07 84.26
23:18:11 23:12:01 4 31.92 0.00 1.82 1.25 0.03 64.98
23:18:11 23:12:01 5 25.11 0.00 2.07 1.88 0.03 70.91
23:18:11 23:12:01 6 0.99 0.00 0.42 14.78 0.03 83.78
23:18:11 23:12:01 7 3.04 0.00 0.28 0.02 0.02 96.64
23:18:11 23:13:01 all 10.85 0.00 1.93 2.39 0.04 84.80
23:18:11 23:13:01 0 28.77 0.00 3.21 2.16 0.05 65.81
23:18:11 23:13:01 1 12.42 0.00 2.05 0.44 0.03 85.06
23:18:11 23:13:01 2 6.70 0.00 1.54 0.05 0.03 91.67
23:18:11 23:13:01 3 2.54 0.00 0.94 1.32 0.02 95.19
23:18:11 23:13:01 4 12.21 0.00 2.19 1.21 0.03 84.36
23:18:11 23:13:01 5 16.67 0.00 1.99 1.24 0.07 80.03
23:18:11 23:13:01 6 3.23 0.00 1.59 10.87 0.03 84.28
23:18:11 23:13:01 7 4.30 0.00 1.92 1.77 0.07 91.93
23:18:11 23:14:01 all 11.53 0.00 5.20 8.07 0.06 75.15
23:18:11 23:14:01 0 10.65 0.00 5.59 0.96 0.05 82.75
23:18:11 23:14:01 1 14.30 0.00 5.48 0.17 0.07 79.98
23:18:11 23:14:01 2 13.06 0.00 5.11 0.10 0.07 81.66
23:18:11 23:14:01 3 11.45 0.00 4.94 8.56 0.07 74.99
23:18:11 23:14:01 4 11.90 0.00 7.11 19.30 0.07 61.62
23:18:11 23:14:01 5 10.67 0.00 4.75 16.14 0.07 68.37
23:18:11 23:14:01 6 9.42 0.00 4.14 18.39 0.05 68.00
23:18:11 23:14:01 7 10.72 0.00 4.52 1.10 0.05 83.61
23:18:11 23:15:01 all 28.84 0.00 4.13 4.15 0.08 62.80
23:18:11 23:15:01 0 26.96 0.00 4.02 1.13 0.08 67.82
23:18:11 23:15:01 1 18.99 0.00 3.40 2.07 0.07 75.47
23:18:11 23:15:01 2 31.81 0.00 4.51 3.30 0.07 60.32
23:18:11 23:15:01 3 36.29 0.00 4.42 0.49 0.07 58.74
23:18:11 23:15:01 4 33.11 0.00 4.10 0.84 0.07 61.88
23:18:11 23:15:01 5 28.24 0.00 4.03 1.69 0.08 65.96
23:18:11 23:15:01 6 32.05 0.00 4.53 16.63 0.10 46.68
23:18:11 23:15:01 7 23.37 0.00 3.95 7.11 0.07 65.50
23:18:11 23:16:01 all 5.08 0.00 0.51 1.19 0.06 93.17
23:18:11 23:16:01 0 3.99 0.00 0.47 0.00 0.07 95.48
23:18:11 23:16:01 1 5.04 0.00 0.42 0.02 0.05 94.47
23:18:11 23:16:01 2 4.86 0.00 0.60 0.03 0.07 94.44
23:18:11 23:16:01 3 4.84 0.00 0.45 0.08 0.08 94.54
23:18:11 23:16:01 4 6.97 0.00 0.75 0.03 0.07 92.17
23:18:11 23:16:01 5 5.21 0.00 0.48 0.00 0.05 94.25
23:18:11 23:16:01 6 4.84 0.00 0.48 0.00 0.03 94.64
23:18:11 23:16:01 7 4.89 0.00 0.42 9.34 0.07 85.29
23:18:11 23:17:01 all 1.39 0.00 0.33 1.26 0.05 96.97
23:18:11 23:17:01 0 1.65 0.00 0.37 0.08 0.05 97.85
23:18:11 23:17:01 1 1.14 0.00 0.35 0.00 0.05 98.46
23:18:11 23:17:01 2 1.50 0.00 0.35 0.48 0.02 97.64
23:18:11 23:17:01 3 1.00 0.00 0.35 0.03 0.07 98.55
23:18:11 23:17:01 4 1.39 0.00 0.32 0.02 0.07 98.21
23:18:11 23:17:01 5 1.97 0.00 0.27 0.08 0.03 97.65
23:18:11 23:17:01 6 1.33 0.00 0.32 0.02 0.03 98.30
23:18:11 23:17:01 7 1.10 0.00 0.37 9.33 0.08 89.12
23:18:11 23:18:01 all 6.95 0.00 0.74 1.61 0.04 90.66
23:18:11 23:18:01 0 2.55 0.00 0.58 0.28 0.02 96.57
23:18:11 23:18:01 1 2.74 0.00 0.62 1.19 0.03 95.41
23:18:11 23:18:01 2 0.70 0.00 0.57 0.28 0.03 98.41
23:18:11 23:18:01 3 5.61 0.00 0.73 0.27 0.03 93.36
23:18:11 23:18:01 4 3.19 0.00 0.77 0.10 0.02 95.93
23:18:11 23:18:01 5 0.88 0.00 0.47 0.10 0.02 98.53
23:18:11 23:18:01 6 37.10 0.00 1.50 0.87 0.05 60.48
23:18:11 23:18:01 7 2.84 0.00 0.65 9.78 0.05 86.67
23:18:11 Average: all 10.61 0.00 1.94 3.01 0.05 84.39
23:18:11 Average: 0 10.75 0.00 2.07 0.67 0.05 86.47
23:18:11 Average: 1 8.01 0.00 1.80 0.66 0.05 89.49
23:18:11 Average: 2 8.47 0.00 1.87 0.64 0.04 88.99
23:18:11 Average: 3 10.76 0.00 1.79 1.69 0.06 85.71
23:18:11 Average: 4 14.37 0.00 2.43 3.22 0.05 79.93
23:18:11 Average: 5 12.66 0.00 2.00 2.99 0.05 82.30
23:18:11 Average: 6 12.69 0.00 1.85 8.76 0.05 76.65
23:18:11 Average: 7 7.16 0.00 1.72 5.50 0.06 85.56
23:18:11
23:18:11
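The two sar reports above close out the build: `sar -b -r -n DEV` covers I/O transfer rates, memory usage and per-interface traffic for the 23:11-23:18 window (the bwrtn/s and %iowait peaks fall in the 23:14 interval, while the containers were being brought up, and the %user peak at 23:15 roughly matches the CSIT run seen earlier in the log), and `sar -P ALL` breaks utilisation down per CPU. The same reports can be regenerated from the binary sysstat data; a sketch, assuming the default Debian/Ubuntu data file location:

    #!/bin/bash
    # Re-render the reports from today's sysstat data file (path assumed;
    # sysstat on Debian/Ubuntu writes daily binaries to /var/log/sysstat/saDD).
    SA_FILE=/var/log/sysstat/sa$(date +%d)

    sar -b -r -n DEV -f "$SA_FILE"   # I/O rates, memory, per-NIC traffic
    sar -P ALL -f "$SA_FILE"         # %user/%system/%iowait/%idle per CPU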