Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-8694 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-vDNGYFMnrIpo/agent.2140
SSH_AGENT_PID=2142
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_11347858321556987045.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_11347858321556987045.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 8361cb0e3663a610a46bc5ea8a0cc783ade26f89 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8361cb0e3663a610a46bc5ea8a0cc783ade26f89 # timeout=30
Commit message: "Fix config files removing hibernate deprecated properties and changing robot deprecated commands in test files"
 > git rev-list --no-walk 8361cb0e3663a610a46bc5ea8a0cc783ade26f89 # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10884642917397392476.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-NbUn
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-NbUn/bin to PATH
Generating Requirements File
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.9 requires openstacksdk>=2.1.0, but you have openstacksdk 0.62.0 which is incompatible.
Python 3.10.6
pip 24.0 from /tmp/venv-NbUn/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.2.2
aspy.yaml==1.3.0
attrs==23.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.34.49
botocore==1.34.49
bs4==0.0.2
cachetools==5.3.2
certifi==2024.2.2
cffi==1.16.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.6.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.2
email-validator==2.1.0.post1
filelock==3.13.1
future==1.0.0
gitdb==4.0.11
GitPython==3.1.42
google-auth==2.28.1
httplib2==0.22.0
identify==2.5.35
idna==3.6
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.3
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
keystoneauth1==5.6.0
kubernetes==29.0.0
lftools==0.37.9
lxml==5.1.0
MarkupSafe==2.1.5
msgpack==1.0.7
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.2.1
netifaces==0.11.0
niet==1.4.2
nodeenv==1.8.0
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==0.62.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.0.1
oslo.config==9.4.0
oslo.context==5.4.0
oslo.i18n==6.3.0
oslo.log==5.5.0
oslo.serialization==5.4.0
oslo.utils==7.1.0
packaging==23.2
pbr==6.0.0
platformdirs==4.2.0
prettytable==3.10.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycparser==2.21
pygerrit2==2.0.15
PyGithub==2.2.0
pyinotify==0.9.6
PyJWT==2.8.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.8.2
pyrsistent==0.20.0
python-cinderclient==9.4.0
python-dateutil==2.8.2
python-heatclient==3.4.0
python-jenkins==1.8.2
python-keystoneclient==5.3.0
python-magnumclient==4.3.0
python-novaclient==18.4.0
python-openstackclient==6.0.1
python-swiftclient==4.4.0
PyYAML==6.0.1
referencing==0.33.0
requests==2.31.0
requests-oauthlib==1.3.1
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.18.0
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.0
simplejson==3.19.2
six==1.16.0
smmap==5.0.1
soupsieve==2.5
stevedore==5.2.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.12.3
tqdm==4.66.2
typing_extensions==4.10.0
tzdata==2024.1
urllib3==1.26.18
virtualenv==20.25.1
wcwidth==0.2.13
websocket-client==1.7.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.2.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins5787392273589121354.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
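For reference, the dependency conflict pip reports above can be reproduced in the created venv with pip's own checker. The following is a minimal, hypothetical sketch only; the upgrade line is one possible remedy and is not something this job actually runs:

  # Hedged sketch, assuming the venv path /tmp/venv-NbUn from the log above.
  source /tmp/venv-NbUn/bin/activate
  python3 -m pip check                           # reports: lftools 0.37.9 requires openstacksdk>=2.1.0, ...
  python3 -m pip install 'openstacksdk>=2.1.0'   # one possible fix, if nothing else pins 0.62.0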
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins2761987451037863148.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
+ set +u
+ save_set
+ RUN_CSIT_SAVE_SET=ehxB
+ RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
+ '[' 1 -eq 0 ']'
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ export ROBOT_VARIABLES=
+ ROBOT_VARIABLES=
+ export PROJECT=pap
+ PROJECT=pap
+ cd /w/workspace/policy-pap-master-project-csit-pap
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ mktemp -d
++ ROBOT_VENV=/tmp/tmp.Wga7XYV1uF
++ echo ROBOT_VENV=/tmp/tmp.Wga7XYV1uF
+++ python3 --version
++ echo 'Python version is: Python 3.6.9'
Python version is: Python 3.6.9
++ python3 -m venv --clear /tmp/tmp.Wga7XYV1uF
++ source /tmp/tmp.Wga7XYV1uF/bin/activate
+++ deactivate nondestructive
+++ '[' -n '' ']'
+++ '[' -n '' ']'
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
+++ '[' -n '' ']'
+++ unset VIRTUAL_ENV
+++ '[' '!' nondestructive = nondestructive ']'
+++ VIRTUAL_ENV=/tmp/tmp.Wga7XYV1uF
+++ export VIRTUAL_ENV
+++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ PATH=/tmp/tmp.Wga7XYV1uF/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ export PATH
+++ '[' -n '' ']'
+++ '[' -z '' ']'
+++ _OLD_VIRTUAL_PS1=
+++ '[' 'x(tmp.Wga7XYV1uF) ' '!=' x ']'
+++ PS1='(tmp.Wga7XYV1uF) '
+++ export PS1
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
++ set -exu
++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
++ echo 'Installing Python Requirements'
Installing Python Requirements
++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2024.2.2
cffi==1.15.1
charset-normalizer==2.0.12
cryptography==40.0.2
decorator==5.1.1
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
idna==3.6
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
lxml==5.1.0
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
paramiko==3.4.0
pkg_resources==0.0.0
ply==3.11
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.1
python-dateutil==2.8.2
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
WebTest==3.0.0
zipp==3.6.0
++ mkdir -p /tmp/tmp.Wga7XYV1uF/src/onap
++ rm -rf /tmp/tmp.Wga7XYV1uF/src/onap/testsuite
++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
++ echo 'Installing python confluent-kafka library'
Installing python confluent-kafka library
++ python3 -m pip install -qq confluent-kafka
++ echo 'Uninstall docker-py and reinstall docker.'
Uninstall docker-py and reinstall docker.
++ python3 -m pip uninstall -y -qq docker
++ python3 -m pip install -U -qq docker
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2024.2.2
cffi==1.15.1
charset-normalizer==2.0.12
confluent-kafka==2.3.0
cryptography==40.0.2
decorator==5.1.1
deepdiff==5.7.0
dnspython==2.2.1
docker==5.0.3
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
future==1.0.0
idna==3.6
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
Jinja2==3.0.3
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
kafka-python==2.0.2
lxml==5.1.0
MarkupSafe==2.0.1
more-itertools==5.0.0
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
ordered-set==4.0.2
paramiko==3.4.0
pbr==6.0.0
pkg_resources==0.0.0
ply==3.11
protobuf==3.19.6
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.1
python-dateutil==2.8.2
PyYAML==6.0.1
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-onap==0.6.0.dev105
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
robotlibcore-temp==1.0.2
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
websocket-client==1.3.1
WebTest==3.0.0
zipp==3.6.0
++ uname
++ grep -q Linux
++ sudo apt-get -y -qq install libxml2-utils
+ load_set
+ _setopts=ehuxB
++ tr : ' '
++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o nounset
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo ehuxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +e
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +u
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ source_safely /tmp/tmp.Wga7XYV1uF/bin/activate
+ '[' -z /tmp/tmp.Wga7XYV1uF/bin/activate ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /tmp/tmp.Wga7XYV1uF/bin/activate
++ deactivate nondestructive
++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ unset _OLD_VIRTUAL_PATH
++ '[' -n '' ']'
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
++ '[' -n '' ']'
++ unset VIRTUAL_ENV
++ '[' '!' nondestructive = nondestructive ']'
++ VIRTUAL_ENV=/tmp/tmp.Wga7XYV1uF
++ export VIRTUAL_ENV
++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ PATH=/tmp/tmp.Wga7XYV1uF/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ '[' -n '' ']'
++ '[' -z '' ']'
++ _OLD_VIRTUAL_PS1='(tmp.Wga7XYV1uF) '
++ '[' 'x(tmp.Wga7XYV1uF) ' '!=' x ']'
++ PS1='(tmp.Wga7XYV1uF) (tmp.Wga7XYV1uF) '
++ export PS1
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ export TEST_OPTIONS=
+ TEST_OPTIONS=
++ mktemp -d
+ WORKDIR=/tmp/tmp.Nh0lglCdc7
+ cd /tmp/tmp.Nh0lglCdc7
+ docker login -u docker -p docker nexus3.onap.org:10001
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
+ SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
+++ GERRIT_BRANCH=master
+++ echo GERRIT_BRANCH=master
GERRIT_BRANCH=master
+++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
+++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
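The docker login warning above recommends --password-stdin over -p. A minimal sketch of that form follows; REGISTRY_PASS is a hypothetical variable for illustration, not one defined by these scripts:

  # Hedged sketch of the form the warning recommends; REGISTRY_PASS is hypothetical.
  echo "$REGISTRY_PASS" | docker login -u docker --password-stdin nexus3.onap.org:10001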
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
latest: Pulling from prom/prometheus
Digest: sha256:042258e3578a558ce41b036104dfa997b2d25151ab6889a3f4d6187e27b1176c
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:8640e5038e83ca4554ed56b9d76375158bcd51580238c6f5d8adaf3f20dd5379
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:59b5cc74cb5bbcb86c2e85d974415cfa4a6270c5728a7a489a5c6eece42f2b45
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-api
Digest: sha256:71cc3c3555fddbd324c5ddec27e24db340b82732d2f6ce50eddcfdf6715a7ab2
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-pap
Digest: sha256:448850bc9066413f6555e9c62d97da12eaa2c454a1304262987462aae46f4676
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:8670bcaff746ebc196cef9125561eb167e1e65c7e2f8d374c0d8834d57564da4
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
Creating mariadb ...
Creating prometheus ...
Creating compose_zookeeper_1 ...
Creating simulator ...
Creating prometheus ... done
Creating grafana ...
Creating mariadb ... done
Creating policy-db-migrator ...
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating grafana ... done
Creating compose_zookeeper_1 ... done
Creating kafka ...
Creating simulator ... done
Creating kafka ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
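The contents of wait_for_rest.sh are not shown in this log; the following is only a hedged approximation of the kind of TCP poll such a script typically performs (it assumes nc from netcat is available on the build node):

  # Hedged approximation only; the real script's contents are not in this log.
  host=localhost port=30003
  until nc -z "$host" "$port"; do
    echo "Waiting for REST to come up on $host port $port..."
    sleep 2
  done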
NAMES                 STATUS
policy-apex-pdp       Up 10 seconds
policy-pap            Up 11 seconds
kafka                 Up 12 seconds
policy-api            Up 17 seconds
grafana               Up 16 seconds
simulator             Up 13 seconds
compose_zookeeper_1   Up 14 seconds
mariadb               Up 18 seconds
prometheus            Up 19 seconds
NAMES                 STATUS
policy-apex-pdp       Up 15 seconds
policy-pap            Up 16 seconds
kafka                 Up 17 seconds
policy-api            Up 22 seconds
grafana               Up 21 seconds
simulator             Up 18 seconds
compose_zookeeper_1   Up 19 seconds
mariadb               Up 23 seconds
prometheus            Up 25 seconds
NAMES                 STATUS
policy-apex-pdp       Up 20 seconds
policy-pap            Up 21 seconds
kafka                 Up 22 seconds
policy-api            Up 27 seconds
grafana               Up 26 seconds
simulator             Up 23 seconds
compose_zookeeper_1   Up 24 seconds
mariadb               Up 29 seconds
prometheus            Up 30 seconds
NAMES                 STATUS
policy-apex-pdp       Up 25 seconds
policy-pap            Up 26 seconds
kafka                 Up 27 seconds
policy-api            Up 32 seconds
grafana               Up 31 seconds
simulator             Up 28 seconds
compose_zookeeper_1   Up 29 seconds
mariadb               Up 34 seconds
prometheus            Up 35 seconds
NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 31 seconds
kafka                 Up 32 seconds
policy-api            Up 37 seconds
grafana               Up 36 seconds
simulator             Up 34 seconds
compose_zookeeper_1   Up 34 seconds
mariadb               Up 39 seconds
prometheus            Up 40 seconds
NAMES                 STATUS
policy-apex-pdp       Up 35 seconds
policy-pap            Up 36 seconds
kafka                 Up 37 seconds
policy-api            Up 42 seconds
grafana               Up 41 seconds
simulator             Up 39 seconds
compose_zookeeper_1   Up 40 seconds
mariadb               Up 44 seconds
prometheus            Up 45 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:14:57 up 4 min, 0 users, load average: 3.13, 1.44, 0.58
Tasks: 208 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 13.6 us, 2.9 sy, 0.0 ni, 79.0 id, 4.4 wa, 0.0 hi, 0.0 si, 0.0 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.9G         22G        1.3M        6.2G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 35 seconds
policy-pap            Up 36 seconds
kafka                 Up 38 seconds
policy-api            Up 42 seconds
grafana               Up 41 seconds
simulator             Up 39 seconds
compose_zookeeper_1   Up 40 seconds
mariadb               Up 44 seconds
prometheus            Up 45 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O     PIDS
dc441b915cab   policy-apex-pdp       1.31%     185.6MiB / 31.41GiB   0.58%     10.2kB / 20kB     0B / 0B       49
5fd9678fc98d   policy-pap            1.75%     462.6MiB / 31.41GiB   1.44%     32.3kB / 33.8kB   0B / 153MB    61
a31d97e8bb12   kafka                 5.81%     398.4MiB / 31.41GiB   1.24%     75.9kB / 79.6kB   0B / 512kB    85
2aa965e89e62 policy-api 0.13% 770.3MiB / 31.41GiB 2.39% 1MB / 737kB 0B / 0B 56 1e9eb28c678a grafana 0.02% 49.85MiB / 31.41GiB 0.15% 18.9kB / 3.55kB 0B / 24MB 17 fe76a8ef66c7 simulator 0.09% 124.3MiB / 31.41GiB 0.39% 1.27kB / 0B 225kB / 0B 76 10eb860b5193 compose_zookeeper_1 0.13% 99.47MiB / 31.41GiB 0.31% 56.4kB / 49.8kB 0B / 385kB 60 f2e6a844e46f mariadb 0.02% 101.8MiB / 31.41GiB 0.32% 995kB / 1.19MB 11MB / 68.3MB 40 cde7f4d777b4 prometheus 0.00% 19.56MiB / 31.41GiB 0.06% 39.4kB / 1.95kB 4.1kB / 0B 13 + echo + cd /tmp/tmp.Nh0lglCdc7 + echo 'Reading the testplan:' Reading the testplan: + echo 'pap-test.robot pap-slas.robot' + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' + cat testplan.txt /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ++ xargs + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... + relax_set + set +e + set +o pipefail + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ============================================================================== pap ============================================================================== pap.Pap-Test ============================================================================== LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | ------------------------------------------------------------------------------ LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | ------------------------------------------------------------------------------ LoadNodeTemplates :: Create node templates in database using speci... 
| PASS | ------------------------------------------------------------------------------ Healthcheck :: Verify policy pap health check | PASS | ------------------------------------------------------------------------------ Consolidated Healthcheck :: Verify policy consolidated health check | PASS | ------------------------------------------------------------------------------ Metrics :: Verify policy pap is exporting prometheus metrics | PASS | ------------------------------------------------------------------------------ AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | ------------------------------------------------------------------------------ ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | ------------------------------------------------------------------------------ DeployPdpGroups :: Deploy policies in PdpGroups | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | ------------------------------------------------------------------------------ UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | ------------------------------------------------------------------------------ UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | ------------------------------------------------------------------------------ QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | ------------------------------------------------------------------------------ DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | ------------------------------------------------------------------------------ DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | ------------------------------------------------------------------------------ QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | ------------------------------------------------------------------------------ pap.Pap-Test | PASS | 22 tests, 22 passed, 0 failed ============================================================================== pap.Pap-Slas ============================================================================== WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeForHealthcheck :: Validate component healthche... 
| PASS | ------------------------------------------------------------------------------ ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | ------------------------------------------------------------------------------ ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | ------------------------------------------------------------------------------ ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | ------------------------------------------------------------------------------ pap.Pap-Slas | PASS | 8 tests, 8 passed, 0 failed ============================================================================== pap | PASS | 30 tests, 30 passed, 0 failed ============================================================================== Output: /tmp/tmp.Nh0lglCdc7/output.xml Log: /tmp/tmp.Nh0lglCdc7/log.html Report: /tmp/tmp.Nh0lglCdc7/report.html + RESULT=0 + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + echo 'RESULT: 0' RESULT: 0 + exit 0 + on_exit + rc=0 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]] + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes kafka Up 2 minutes policy-api Up 2 minutes grafana Up 2 minutes simulator Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes prometheus Up 2 minutes + docker_stats ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 23:16:47 up 6 min, 0 users, load average: 0.83, 1.16, 0.57 Tasks: 196 total, 1 running, 129 sleeping, 0 stopped, 0 zombie %Cpu(s): 10.8 us, 2.2 sy, 0.0 ni, 83.4 id, 3.5 wa, 0.0 hi, 0.0 si, 0.1 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 3.0G 22G 1.3M 6.2G 27G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes kafka Up 2 minutes policy-api Up 2 minutes grafana Up 2 minutes simulator Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes prometheus Up 2 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS dc441b915cab policy-apex-pdp 0.48% 191.6MiB / 31.41GiB 0.60% 58kB / 92.9kB 0B / 0B 52 5fd9678fc98d policy-pap 0.50% 497.1MiB / 31.41GiB 1.55% 2.33MB / 819kB 0B / 153MB 65 a31d97e8bb12 kafka 3.29% 400.4MiB / 31.41GiB 1.24% 245kB / 220kB 0B / 610kB 85 2aa965e89e62 policy-api 0.11% 770.3MiB / 31.41GiB 2.39% 2.49MB / 1.29MB 0B / 0B 58 1e9eb28c678a grafana 0.03% 59.95MiB / 31.41GiB 0.19% 19.8kB / 4.54kB 0B / 24MB 17 fe76a8ef66c7 
simulator 0.06% 124.4MiB / 31.41GiB 0.39% 1.5kB / 0B 225kB / 0B 78 10eb860b5193 compose_zookeeper_1 0.11% 99.47MiB / 31.41GiB 0.31% 59.4kB / 51.5kB 0B / 385kB 60 f2e6a844e46f mariadb 0.01% 103.1MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 68.6MB 28 cde7f4d777b4 prometheus 0.00% 25.29MiB / 31.41GiB 0.08% 219kB / 11.8kB 4.1kB / 0B 13 + echo + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... ++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, kafka, policy-api, policy-db-migrator, grafana, simulator, compose_zookeeper_1, mariadb, prometheus zookeeper_1 | ===> User zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper_1 | ===> Configuring ... zookeeper_1 | ===> Running preflight checks ... zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper_1 | ===> Launching ... zookeeper_1 | ===> Launching zookeeper ... zookeeper_1 | [2024-02-25 23:14:20,723] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-25 23:14:20,732] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-25 23:14:20,732] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-25 23:14:20,732] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-25 23:14:20,732] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-25 23:14:20,734] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-02-25 23:14:20,734] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-02-25 23:14:20,734] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-02-25 23:14:20,734] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper_1 | [2024-02-25 23:14:20,736] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper_1 | [2024-02-25 23:14:20,736] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-25 23:14:20,736] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-25 23:14:20,736] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-25 23:14:20,736] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-25 23:14:20,736] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-25 23:14:20,736] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper_1 | [2024-02-25 23:14:20,750] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) zookeeper_1 | [2024-02-25 23:14:20,752] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-02-25 23:14:20,753] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-02-25 23:14:20,755] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-02-25 23:14:20,765] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,765] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,765] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,765] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,765] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,766] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,766] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,766] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,766] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,766] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:host.name=10eb860b5193 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7
.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | 
[2024-02-25 23:14:20,767] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=settings t=2024-02-25T23:14:15.626168028Z level=info msg="Starting Grafana" version=10.3.3 commit=252761264e22ece57204b327f9130d3b44592c01 branch=HEAD compiled=2024-02-25T23:14:15Z grafana | logger=settings t=2024-02-25T23:14:15.626636877Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-02-25T23:14:15.626653987Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-02-25T23:14:15.626657807Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-02-25T23:14:15.626661547Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-02-25T23:14:15.626665327Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-02-25T23:14:15.626668647Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-02-25T23:14:15.626671837Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-02-25T23:14:15.626675018Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-02-25T23:14:15.626678948Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-02-25T23:14:15.626683138Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-02-25T23:14:15.626686518Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-02-25T23:14:15.626689788Z level=info msg=Target target=[all] grafana | logger=settings t=2024-02-25T23:14:15.626696598Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-02-25T23:14:15.626701308Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-02-25T23:14:15.626704638Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-02-25T23:14:15.626709168Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-02-25T23:14:15.626713228Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-02-25T23:14:15.626717818Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-02-25T23:14:15.627100636Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-02-25T23:14:15.627130456Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-02-25T23:14:15.62783049Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-02-25T23:14:15.629204916Z level=info msg="Executing migration" id="create migration_log 
table" grafana | logger=migrator t=2024-02-25T23:14:15.630130304Z level=info msg="Migration successfully executed" id="create migration_log table" duration=924.768µs grafana | logger=migrator t=2024-02-25T23:14:15.634606041Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-02-25T23:14:15.635192362Z level=info msg="Migration successfully executed" id="create user table" duration=583.971µs grafana | logger=migrator t=2024-02-25T23:14:15.642062415Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-02-25T23:14:15.643414101Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.350096ms grafana | logger=migrator t=2024-02-25T23:14:15.649322765Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-02-25T23:14:15.650567209Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.242934ms grafana | logger=migrator t=2024-02-25T23:14:15.657515293Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-02-25T23:14:15.658790628Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.274255ms zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,767] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,768] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,768] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,768] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,768] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,768] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper_1 | [2024-02-25 23:14:20,769] INFO 
minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,770] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,770] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-02-25 23:14:20,770] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-25 23:14:20,771] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-25 23:14:20,774] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,774] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,774] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper_1 | [2024-02-25 23:14:20,774] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper_1 | [2024-02-25 23:14:20,774] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:20,805] INFO Logging initialized @684ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper_1 | [2024-02-25 23:14:20,906] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-02-25 23:14:20,906] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-02-25 23:14:20,929] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-02-25 23:14:20,966] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper_1 | [2024-02-25 23:14:20,966] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper_1 | [2024-02-25 23:14:20,968] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) zookeeper_1 | [2024-02-25 23:14:20,975] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper_1 | [2024-02-25 23:14:20,985] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 
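Aside: the minSessionTimeout/maxSessionTimeout values logged a few entries up follow directly from tickTime. When both are left unset (-1), ZooKeeper derives them as 2x and 20x tickTime, so the tickTime of 2000 ms reported in the "Created server" entry above yields exactly the 4000 ms and 40000 ms seen here. A minimal shell sketch of that arithmetic:

  TICK_MS=2000                                   # tickTime from the "Created server" entry above
  echo "minSessionTimeout=$((TICK_MS * 2)) ms"   # -> 4000 ms, matches the log
  echo "maxSessionTimeout=$((TICK_MS * 20)) ms"  # -> 40000 ms, matches the log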
zookeeper_1 | [2024-02-25 23:14:21,002] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper_1 | [2024-02-25 23:14:21,003] INFO Started @882ms (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-02-25 23:14:21,003] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper_1 | [2024-02-25 23:14:21,008] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-02-25 23:14:21,010] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-02-25 23:14:21,011] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-02-25 23:14:21,013] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-02-25 23:14:21,028] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-02-25 23:14:21,029] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-02-25 23:14:21,030] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-02-25 23:14:21,030] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-02-25 23:14:21,037] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper_1 | [2024-02-25 23:14:21,037] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-02-25 23:14:21,041] INFO Snapshot loaded in 11 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-02-25 23:14:21,042] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-02-25 23:14:21,043] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-25 23:14:21,053] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper_1 | [2024-02-25 23:14:21,053] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper_1 | [2024-02-25 23:14:21,069] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper_1 | [2024-02-25 23:14:21,070] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) zookeeper_1 | [2024-02-25 23:14:23,562] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) grafana | logger=migrator t=2024-02-25T23:14:15.707723363Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-02-25T23:14:15.708853784Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.130521ms grafana | logger=migrator t=2024-02-25T23:14:15.715981502Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-02-25T23:14:15.7204989Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.515638ms grafana | logger=migrator t=2024-02-25T23:14:15.725039827Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-02-25T23:14:15.725998446Z level=info msg="Migration successfully executed" id="create user table v2" duration=959.319µs grafana | logger=migrator t=2024-02-25T23:14:15.730272278Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-02-25T23:14:15.731098754Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=825.766µs grafana | logger=migrator t=2024-02-25T23:14:15.738392945Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-02-25T23:14:15.739255231Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=861.996µs grafana | logger=migrator t=2024-02-25T23:14:15.743997693Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-02-25T23:14:15.744740137Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=741.534µs grafana | logger=migrator t=2024-02-25T23:14:15.749655292Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-02-25T23:14:15.750534489Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=856.987µs grafana | logger=migrator t=2024-02-25T23:14:15.756957994Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-02-25T23:14:15.758797639Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.838495ms grafana | logger=migrator t=2024-02-25T23:14:15.763023401Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-02-25T23:14:15.763054811Z level=info msg="Migration successfully executed" id="Update user table charset" duration=31.99µs grafana | logger=migrator t=2024-02-25T23:14:15.767893115Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-02-25T23:14:15.769499676Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.60139ms grafana | logger=migrator t=2024-02-25T23:14:15.773451823Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-02-25T23:14:15.773891881Z level=info msg="Migration successfully executed" id="Add missing user data" duration=439.559µs grafana | logger=migrator t=2024-02-25T23:14:15.779898557Z level=info msg="Executing migration" id="Add is_disabled column to user" mariadb | 2024-02-25 23:14:12+00:00 [Note] 
[Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-02-25 23:14:12+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-02-25 23:14:12+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-02-25 23:14:12+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-02-25 23:14:12 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-02-25 23:14:12 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-02-25 23:14:12 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-02-25 23:14:14+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-02-25 23:14:14+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-02-25 23:14:14+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-02-25 23:14:14 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-02-25 23:14:14 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-02-25 23:14:14 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-02-25 23:14:14 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-02-25 23:14:14 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-02-25 23:14:14 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-02-25 23:14:14 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 
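Aside: the "Starting temporary server" phase above is socket-only; its banner (just below) reports port: 0, meaning nothing listens on TCP while the /docker-entrypoint-initdb.d scripts run. A sketch of how one could poke that init-phase server by hand, through the unix socket it prints (MYSQL_ROOT_PASSWORD comes from the container environment):

  # Init-phase server accepts no TCP connections; use the printed socket path.
  mysql --socket=/run/mysqld/mysqld.sock -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "SELECT VERSION();"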
mariadb | 2024-02-25 23:14:14 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-02-25 23:14:14 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-02-25 23:14:14 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-02-25 23:14:15+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-02-25 23:14:17+00:00 [Note] [Entrypoint]: Creating user policy_user grafana | logger=migrator t=2024-02-25T23:14:15.781036618Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.137301ms grafana | logger=migrator t=2024-02-25T23:14:15.78473373Z level=info msg="Executing migration" id="Add index user.login/user.email" grafana | logger=migrator t=2024-02-25T23:14:15.785540526Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=806.126µs grafana | logger=migrator t=2024-02-25T23:14:15.789312358Z level=info msg="Executing migration" id="Add is_service_account column to user" grafana | logger=migrator t=2024-02-25T23:14:15.790601644Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.285675ms grafana | logger=migrator t=2024-02-25T23:14:15.794727933Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-02-25T23:14:15.808527839Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=13.810686ms grafana | logger=migrator t=2024-02-25T23:14:15.814483384Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-02-25T23:14:15.815114696Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=626.842µs grafana | logger=migrator t=2024-02-25T23:14:15.818976021Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-02-25T23:14:15.819818667Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=842.596µs grafana | logger=migrator t=2024-02-25T23:14:15.823488838Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-02-25T23:14:15.824350565Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=861.527µs grafana | logger=migrator t=2024-02-25T23:14:15.830576895Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-02-25T23:14:15.83135693Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=780.655µs grafana | logger=migrator t=2024-02-25T23:14:15.835867878Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-02-25T23:14:15.837088301Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.221704ms grafana | logger=migrator t=2024-02-25T23:14:15.841661399Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-02-25T23:14:15.84169978Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=38.961µs grafana | logger=migrator 
t=2024-02-25T23:14:15.849321418Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" grafana | logger=migrator t=2024-02-25T23:14:15.850104952Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=785.875µs grafana | logger=migrator t=2024-02-25T23:14:15.855263572Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-02-25T23:14:15.856446645Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.183063ms grafana | logger=migrator t=2024-02-25T23:14:15.860926121Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-02-25T23:14:15.862105784Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.179603ms grafana | logger=migrator t=2024-02-25T23:14:15.868089339Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-02-25T23:14:15.869343004Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.252995ms grafana | logger=migrator t=2024-02-25T23:14:15.874628956Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-02-25T23:14:15.879857837Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.229621ms grafana | logger=migrator t=2024-02-25T23:14:15.884010717Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-02-25T23:14:15.884803532Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=791.935µs grafana | logger=migrator t=2024-02-25T23:14:15.890267658Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-02-25T23:14:15.891081934Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=813.996µs grafana | logger=migrator t=2024-02-25T23:14:15.895887776Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-02-25T23:14:15.896719633Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=831.497µs grafana | logger=migrator t=2024-02-25T23:14:15.900897563Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-02-25T23:14:15.901740419Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=842.316µs grafana | logger=migrator t=2024-02-25T23:14:15.907158024Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-02-25T23:14:15.90850959Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.350696ms grafana | logger=migrator t=2024-02-25T23:14:15.91365463Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-02-25T23:14:15.914362543Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=709.783µs grafana | logger=migrator t=2024-02-25T23:14:15.920202336Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-02-25T23:14:15.920753966Z level=info msg="Migration successfully 
executed" id="drop temp_user_tmp_qwerty" duration=550.99µs grafana | logger=migrator t=2024-02-25T23:14:15.924696143Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" grafana | logger=migrator t=2024-02-25T23:14:15.925363726Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=654.293µs grafana | logger=migrator t=2024-02-25T23:14:15.930500235Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-02-25T23:14:15.931450143Z level=info msg="Migration successfully executed" id="create star table" duration=949.128µs grafana | logger=migrator t=2024-02-25T23:14:15.935816348Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-02-25T23:14:15.936630703Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=890.998µs grafana | logger=migrator t=2024-02-25T23:14:15.942260852Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-02-25T23:14:15.943448765Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.187063ms grafana | logger=migrator t=2024-02-25T23:14:15.952479039Z level=info msg="Executing migration" id="create index UQE_org_name - v1" mariadb | 2024-02-25 23:14:17+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-02-25 23:14:17+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-02-25 23:14:17+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. 
mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp mariadb | mariadb | 2024-02-25 23:14:18+00:00 [Note] [Entrypoint]: Stopping temporary server mariadb | 2024-02-25 23:14:18 0 [Note] mariadbd (initiated by: unknown): Normal shutdown mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: FTS optimize thread exiting. mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Starting shutdown... mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Buffer pool(s) dump completed at 240225 23:14:18 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Shutdown completed; log sequence number 329120; transaction id 298 mariadb | 2024-02-25 23:14:18 0 [Note] mariadbd: Shutdown complete mariadb | mariadb | 2024-02-25 23:14:18+00:00 [Note] [Entrypoint]: Temporary server stopped mariadb | mariadb | 2024-02-25 23:14:18+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 
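Aside: stripped of the bash -xv echo and license header, the db.sh init script traced above reduces to the loop below; MYSQL_ROOT_PASSWORD, MYSQL_USER and MYSQL_PASSWORD are supplied by the container environment (the trace shows secret and policy_user/policy_user in this run):

  #!/bin/bash
  # Create each Policy Framework schema and grant the application user full rights on it.
  for db in migration pooling policyadmin operationshistory clampacm policyclamp
  do
      mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
      mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
  done
  mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
  # Pre-load the CLAMP tables; -f keeps going past individual statement errors.
  mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql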
mariadb | mariadb | 2024-02-25 23:14:18 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... grafana | logger=migrator t=2024-02-25T23:14:15.953840916Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.361637ms grafana | logger=migrator t=2024-02-25T23:14:15.958889703Z level=info msg="Executing migration" id="create org_user table v1" grafana | logger=migrator t=2024-02-25T23:14:15.959928153Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.0375ms grafana | logger=migrator t=2024-02-25T23:14:15.964657405Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-02-25T23:14:15.966057541Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.399956ms grafana | logger=migrator t=2024-02-25T23:14:15.97118908Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-02-25T23:14:15.971999746Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=807.106µs grafana | logger=migrator t=2024-02-25T23:14:15.97737186Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-02-25T23:14:15.978840118Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.459328ms grafana | logger=migrator t=2024-02-25T23:14:15.983137301Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-02-25T23:14:15.983180742Z level=info msg="Migration successfully executed" id="Update org table charset" duration=44.741µs grafana | logger=migrator t=2024-02-25T23:14:15.987433474Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-02-25T23:14:15.987475665Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=39.831µs grafana | logger=migrator t=2024-02-25T23:14:15.99137175Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-02-25T23:14:15.991636665Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=264.395µs grafana | logger=migrator t=2024-02-25T23:14:15.997453248Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-02-25T23:14:15.998632551Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.178492ms grafana | logger=migrator t=2024-02-25T23:14:16.003525305Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-02-25T23:14:16.004863841Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.338176ms grafana | logger=migrator t=2024-02-25T23:14:16.009103412Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-02-25T23:14:16.010476038Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.371986ms grafana | logger=migrator t=2024-02-25T23:14:16.014497055Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-02-25T23:14:16.015175108Z level=info msg="Migration successfully executed" id="create 
dashboard_tag table" duration=677.353µs grafana | logger=migrator t=2024-02-25T23:14:16.02053916Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-02-25T23:14:16.021373836Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=834.266µs grafana | logger=migrator t=2024-02-25T23:14:16.02631477Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-02-25T23:14:16.027941201Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.631751ms grafana | logger=migrator t=2024-02-25T23:14:16.033479886Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" grafana | logger=migrator t=2024-02-25T23:14:16.042241233Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.764027ms grafana | logger=migrator t=2024-02-25T23:14:16.047788679Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-02-25T23:14:16.048758158Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=968.799µs grafana | logger=migrator t=2024-02-25T23:14:16.052837996Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-02-25T23:14:16.053811335Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=976.569µs grafana | logger=migrator t=2024-02-25T23:14:16.057910163Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-02-25T23:14:16.058857011Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=947.248µs grafana | logger=migrator t=2024-02-25T23:14:16.065356026Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-02-25T23:14:16.06609406Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=737.834µs grafana | logger=migrator t=2024-02-25T23:14:16.070521086Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-02-25T23:14:16.071895032Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.374095ms grafana | logger=migrator t=2024-02-25T23:14:16.076158283Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-02-25T23:14:16.076244955Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=87.012µs grafana | logger=migrator t=2024-02-25T23:14:16.081292461Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-02-25T23:14:16.084428452Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.134761ms grafana | logger=migrator t=2024-02-25T23:14:16.120511874Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-02-25T23:14:16.123604014Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.08229ms grafana | logger=migrator t=2024-02-25T23:14:16.129183161Z level=info msg="Executing migration" id="Add column gnetId in 
dashboard" grafana | logger=migrator t=2024-02-25T23:14:16.130561967Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.378055ms grafana | logger=migrator t=2024-02-25T23:14:16.136213395Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-02-25T23:14:16.137659553Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.445978ms grafana | logger=migrator t=2024-02-25T23:14:16.142412564Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-02-25T23:14:16.14533551Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.922266ms grafana | logger=migrator t=2024-02-25T23:14:16.150399067Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-02-25 23:14:18 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-02-25 23:14:18 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-02-25 23:14:18 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-02-25 23:14:18 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: log sequence number 329120; transaction id 299 mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool mariadb | 2024-02-25 23:14:19 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-02-25 23:14:19 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-02-25 23:14:19 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. mariadb | 2024-02-25 23:14:19 0 [Note] Server socket created on IP: '0.0.0.0'. mariadb | 2024-02-25 23:14:19 0 [Note] Server socket created on IP: '::'. mariadb | 2024-02-25 23:14:19 0 [Note] mariadbd: ready for connections. 
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution mariadb | 2024-02-25 23:14:19 0 [Note] InnoDB: Buffer pool(s) load completed at 240225 23:14:19 mariadb | 2024-02-25 23:14:19 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) mariadb | 2024-02-25 23:14:19 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) mariadb | 2024-02-25 23:14:20 39 [Warning] Aborted connection 39 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) mariadb | 2024-02-25 23:14:21 85 [Warning] Aborted connection 85 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) grafana | logger=migrator t=2024-02-25T23:14:16.151335705Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=936.578µs grafana | logger=migrator t=2024-02-25T23:14:16.157097775Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator t=2024-02-25T23:14:16.158037314Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=939.229µs grafana | logger=migrator t=2024-02-25T23:14:16.161783876Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2024-02-25T23:14:16.161819006Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=29.54µs grafana | logger=migrator t=2024-02-25T23:14:16.167062447Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2024-02-25T23:14:16.167092778Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=31.851µs grafana | logger=migrator t=2024-02-25T23:14:16.172071113Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-02-25T23:14:16.176557638Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=4.468205ms grafana | logger=migrator t=2024-02-25T23:14:16.185382649Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-02-25T23:14:16.187922837Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.538248ms grafana | logger=migrator t=2024-02-25T23:14:16.192336522Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-02-25T23:14:16.195352449Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=3.015188ms grafana | logger=migrator t=2024-02-25T23:14:16.199384137Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-02-25T23:14:16.201497837Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.11305ms grafana | logger=migrator t=2024-02-25T23:14:16.206502553Z level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-02-25T23:14:16.206747428Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" 
duration=244.895µs grafana | logger=migrator t=2024-02-25T23:14:16.211217734Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-02-25T23:14:16.212412487Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.195723ms grafana | logger=migrator t=2024-02-25T23:14:16.21671803Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-02-25T23:14:16.217611507Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=893.427µs grafana | logger=migrator t=2024-02-25T23:14:16.222557802Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-02-25T23:14:16.222599733Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=43.391µs grafana | logger=migrator t=2024-02-25T23:14:16.226857164Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-02-25T23:14:16.228115808Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.257655ms grafana | logger=migrator t=2024-02-25T23:14:16.234124493Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-02-25T23:14:16.234917738Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=788.135µs grafana | logger=migrator t=2024-02-25T23:14:16.240094948Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator t=2024-02-25T23:14:16.251266282Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=11.172974ms grafana | logger=migrator t=2024-02-25T23:14:16.255254229Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-02-25T23:14:16.25584169Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=599.531µs grafana | logger=migrator t=2024-02-25T23:14:16.259942188Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" grafana | logger=migrator t=2024-02-25T23:14:16.260872697Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=930.429µs grafana | logger=migrator t=2024-02-25T23:14:16.266346881Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" grafana | logger=migrator t=2024-02-25T23:14:16.267253679Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=906.378µs grafana | logger=migrator t=2024-02-25T23:14:16.272647453Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" grafana | logger=migrator t=2024-02-25T23:14:16.272967069Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=321.616µs grafana | logger=migrator t=2024-02-25T23:14:16.277076497Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" grafana | logger=migrator t=2024-02-25T23:14:16.277866953Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" 
duration=788.835µs grafana | logger=migrator t=2024-02-25T23:14:16.283238236Z level=info msg="Executing migration" id="Add check_sum column" grafana | logger=migrator t=2024-02-25T23:14:16.286976007Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.738501ms grafana | logger=migrator t=2024-02-25T23:14:16.292040184Z level=info msg="Executing migration" id="Add index for dashboard_title" grafana | logger=migrator t=2024-02-25T23:14:16.29287369Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=833.326µs grafana | logger=migrator t=2024-02-25T23:14:16.298238203Z level=info msg="Executing migration" id="delete tags for deleted dashboards" grafana | logger=migrator t=2024-02-25T23:14:16.298422947Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=187.334µs grafana | logger=migrator t=2024-02-25T23:14:16.30224153Z level=info msg="Executing migration" id="delete stars for deleted dashboards" grafana | logger=migrator t=2024-02-25T23:14:16.302421453Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=180.243µs grafana | logger=migrator t=2024-02-25T23:14:16.308541011Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" grafana | logger=migrator t=2024-02-25T23:14:16.310009329Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.467598ms grafana | logger=migrator t=2024-02-25T23:14:16.315097647Z level=info msg="Executing migration" id="Add isPublic for dashboard" grafana | logger=migrator t=2024-02-25T23:14:16.318887169Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.796412ms grafana | logger=migrator t=2024-02-25T23:14:16.3225356Z level=info msg="Executing migration" id="create data_source table" grafana | logger=migrator t=2024-02-25T23:14:16.323289254Z level=info msg="Migration successfully executed" id="create data_source table" duration=752.954µs grafana | logger=migrator t=2024-02-25T23:14:16.32935338Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2024-02-25T23:14:16.330182285Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=828.665µs grafana | logger=migrator t=2024-02-25T23:14:16.334231774Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2024-02-25T23:14:16.33510802Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=876.216µs grafana | logger=migrator t=2024-02-25T23:14:16.33976712Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2024-02-25T23:14:16.340978443Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.211273ms grafana | logger=migrator t=2024-02-25T23:14:16.346224144Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2024-02-25T23:14:16.347380905Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.156621ms grafana | logger=migrator t=2024-02-25T23:14:16.351473024Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2024-02-25T23:14:16.362322183Z level=info 
msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=10.849479ms grafana | logger=migrator t=2024-02-25T23:14:16.366743317Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2024-02-25T23:14:16.367622385Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=878.798µs grafana | logger=migrator t=2024-02-25T23:14:16.372567799Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2024-02-25T23:14:16.373499267Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=930.878µs grafana | logger=migrator t=2024-02-25T23:14:16.377261289Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2024-02-25T23:14:16.378570344Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.308285ms grafana | logger=migrator t=2024-02-25T23:14:16.383909196Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2024-02-25T23:14:16.385219842Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.309865ms grafana | logger=migrator t=2024-02-25T23:14:16.389429492Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2024-02-25T23:14:16.393110233Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.679271ms grafana | logger=migrator t=2024-02-25T23:14:16.397058219Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2024-02-25T23:14:16.399533076Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.474488ms grafana | logger=migrator t=2024-02-25T23:14:16.404624044Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2024-02-25T23:14:16.404650765Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=27.371µs grafana | logger=migrator t=2024-02-25T23:14:16.409013778Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2024-02-25T23:14:16.409223552Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=209.444µs grafana | logger=migrator t=2024-02-25T23:14:16.413579786Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2024-02-25T23:14:16.417344498Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.764142ms grafana | logger=migrator t=2024-02-25T23:14:16.423023786Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2024-02-25T23:14:16.42321481Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=197.384µs grafana | logger=migrator t=2024-02-25T23:14:16.428355979Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2024-02-25T23:14:16.428605693Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=249.034µs grafana | logger=migrator t=2024-02-25T23:14:16.432721763Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2024-02-25T23:14:16.4362323Z 
level=info msg="Migration successfully executed" id="Add uid column" duration=3.509987ms grafana | logger=migrator t=2024-02-25T23:14:16.441073803Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2024-02-25T23:14:16.441409719Z level=info msg="Migration successfully executed" id="Update uid value" duration=336.056µs grafana | logger=migrator t=2024-02-25T23:14:16.446204652Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2024-02-25T23:14:16.447460205Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.255033ms grafana | logger=migrator t=2024-02-25T23:14:16.45288631Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2024-02-25T23:14:16.453776376Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=889.506µs grafana | logger=migrator t=2024-02-25T23:14:16.458565879Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2024-02-25T23:14:16.45965637Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.086092ms grafana | logger=migrator t=2024-02-25T23:14:16.464347279Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2024-02-25T23:14:16.465581264Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.233325ms grafana | logger=migrator t=2024-02-25T23:14:16.516739045Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2024-02-25T23:14:16.51810662Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.368195ms grafana | logger=migrator t=2024-02-25T23:14:16.523934182Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2024-02-25T23:14:16.524563095Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=628.673µs grafana | logger=migrator t=2024-02-25T23:14:16.531066199Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2024-02-25T23:14:16.53218395Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.117861ms grafana | logger=migrator t=2024-02-25T23:14:16.539109214Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2024-02-25T23:14:16.540234435Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.125361ms grafana | logger=migrator t=2024-02-25T23:14:16.545052248Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2024-02-25T23:14:16.54623442Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.181483ms grafana | logger=migrator t=2024-02-25T23:14:16.551293947Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2024-02-25T23:14:16.560445462Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=9.155125ms grafana | logger=migrator t=2024-02-25T23:14:16.565351407Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator 
t=2024-02-25T23:14:16.566239475Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=887.218µs grafana | logger=migrator t=2024-02-25T23:14:16.569693131Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2024-02-25T23:14:16.572095667Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=2.399675ms grafana | logger=migrator t=2024-02-25T23:14:16.577835977Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2024-02-25T23:14:16.579207852Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.372676ms grafana | logger=migrator t=2024-02-25T23:14:16.582824361Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2024-02-25T23:14:16.583671858Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=847.247µs grafana | logger=migrator t=2024-02-25T23:14:16.588026621Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2024-02-25T23:14:16.588401209Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=398.987µs grafana | logger=migrator t=2024-02-25T23:14:16.593621929Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2024-02-25T23:14:16.59418237Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=560.591µs grafana | logger=migrator t=2024-02-25T23:14:16.597930682Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator t=2024-02-25T23:14:16.597957633Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=27.94µs grafana | logger=migrator t=2024-02-25T23:14:16.601634703Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2024-02-25T23:14:16.604230913Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.59637ms grafana | logger=migrator t=2024-02-25T23:14:16.609133547Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2024-02-25T23:14:16.611679386Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.545329ms grafana | logger=migrator t=2024-02-25T23:14:16.615933907Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2024-02-25T23:14:16.61610309Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=168.683µs grafana | logger=migrator t=2024-02-25T23:14:16.620600817Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2024-02-25T23:14:16.624049082Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.446285ms grafana | logger=migrator t=2024-02-25T23:14:16.629246412Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2024-02-25T23:14:16.631889723Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.644041ms grafana | logger=migrator t=2024-02-25T23:14:16.63593733Z level=info msg="Executing migration" id="create dashboard_snapshot 
table v4" grafana | logger=migrator t=2024-02-25T23:14:16.636627094Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=689.394µs grafana | logger=migrator t=2024-02-25T23:14:16.641793104Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2024-02-25T23:14:16.642340144Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=546.82µs grafana | logger=migrator t=2024-02-25T23:14:16.647924921Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2024-02-25T23:14:16.649044382Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.118671ms grafana | logger=migrator t=2024-02-25T23:14:16.654261692Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2024-02-25T23:14:16.655437385Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.174763ms grafana | logger=migrator t=2024-02-25T23:14:16.659534484Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2024-02-25T23:14:16.660716856Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.173372ms grafana | logger=migrator t=2024-02-25T23:14:16.665704652Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2024-02-25T23:14:16.666901745Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.196793ms grafana | logger=migrator t=2024-02-25T23:14:16.671300389Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2024-02-25T23:14:16.67136271Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=63.051µs grafana | logger=migrator t=2024-02-25T23:14:16.675761105Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2024-02-25T23:14:16.675790986Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=29.84µs grafana | logger=migrator t=2024-02-25T23:14:16.680118648Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2024-02-25T23:14:16.68382427Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.697692ms grafana | logger=migrator t=2024-02-25T23:14:16.689414297Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2024-02-25T23:14:16.692139628Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.725121ms grafana | logger=migrator t=2024-02-25T23:14:16.704967475Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2024-02-25T23:14:16.705194479Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=232.534µs grafana | logger=migrator t=2024-02-25T23:14:16.714679102Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator 
t=2024-02-25T23:14:16.716052198Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.379146ms kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) kafka | ===> Configuring ... kafka | Running in Zookeeper mode... kafka | ===> Running preflight checks ... kafka | ===> Check if /var/lib/kafka/data is writable ... kafka | ===> Check if Zookeeper is healthy ... kafka | SLF4J: Class path contains multiple SLF4J bindings. kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] kafka | [2024-02-25 23:14:23,492] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,492] INFO Client environment:host.name=a31d97e8bb12 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,492] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,492] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,493] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,493] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-exte
nsion-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/
kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-b
ase-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,493] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,493] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,493] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,493] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,493] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,493] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,493] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,493] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,494] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,494] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,494] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,494] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,497] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,501] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-02-25 23:14:23,506] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-02-25 23:14:23,515] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-25 23:14:23,532] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-25 23:14:23,533] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-25 23:14:23,541] INFO Socket connection established, initiating session, client: /172.17.0.9:58238, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-25 23:14:23,579] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003c5ff0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-25 23:14:23,700] INFO Session: 0x1000003c5ff0000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:23,700] INFO EventThread shut down for session: 0x1000003c5ff0000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... 
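The preflight above opens a short-lived ZooKeeper session against zookeeper:2181 purely to confirm the ensemble is reachable before Kafka launches. A minimal sketch of an equivalent probe, assuming ZooKeeper's standard 'ruok' four-letter-word admin command is available (on ZooKeeper 3.5+ it must be listed in 4lw.commands.whitelist); the host and port are taken from the connectString shown in the log:

import socket

def zk_is_healthy(host: str = "zookeeper", port: int = 2181, timeout: float = 5.0) -> bool:
    """Send the 'ruok' health command; a healthy server answers b'imok'."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"ruok")              # four-letter-word admin command
            return sock.recv(4) == b"imok"     # healthy servers reply 'imok'
    except OSError:
        return False                           # refused / timed out => unhealthy

if __name__ == "__main__":
    print("zookeeper healthy:", zk_is_healthy())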
kafka | [2024-02-25 23:14:24,404] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-02-25 23:14:24,771] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-02-25 23:14:24,846] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-02-25 23:14:24,847] INFO starting (kafka.server.KafkaServer) kafka | [2024-02-25 23:14:24,847] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-02-25 23:14:24,861] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-02-25 23:14:24,865] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:host.name=a31d97e8bb12 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.5
3.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native
-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,865] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,867] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@66746f57 (org.apache.zookeeper.ZooKeeper) kafka | [2024-02-25 23:14:24,871] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-02-25 23:14:24,877] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-25 23:14:24,879] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-02-25 23:14:24,884] INFO Opening socket connection to server zookeeper/172.17.0.3:2181. 
(org.apache.zookeeper.ClientCnxn) kafka | [2024-02-25 23:14:24,892] INFO Socket connection established, initiating session, client: /172.17.0.9:53356, server: zookeeper/172.17.0.3:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-25 23:14:24,931] INFO Session establishment complete on server zookeeper/172.17.0.3:2181, session id = 0x1000003c5ff0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-02-25 23:14:24,937] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-02-25 23:14:25,221] INFO Cluster ID = EgVdN6KHQUyZtQ3qnQB0kQ (kafka.server.KafkaServer) kafka | [2024-02-25 23:14:25,225] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2024-02-25 23:14:25,277] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 kafka | 
group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.6-IV2 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.local.retention.bytes = -2 kafka | log.local.retention.ms = -2 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | 
num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder kafka | process.roles = [] kafka | producer.id.expiration.check.interval.ms = 600000 kafka | producer.id.expiration.ms = 86400000 kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 kafka | remote.log.manager.task.interval.ms = 30000 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 kafka | remote.log.manager.task.retry.backoff.ms = 500 kafka | remote.log.manager.task.retry.jitter = 0.2 kafka | remote.log.manager.thread.pool.size = 10 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager kafka | remote.log.metadata.manager.class.path = null kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. kafka | remote.log.metadata.manager.listener.name = null kafka | remote.log.reader.max.pending.tasks = 100 kafka | remote.log.reader.threads = 10 kafka | remote.log.storage.manager.class.name = null kafka | remote.log.storage.manager.class.path = null kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
kafka | remote.log.storage.system.enable = false kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.connect.timeout.ms = null kafka | sasl.login.read.timeout.ms = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.retry.backoff.max.ms = 10000 kafka | sasl.login.retry.backoff.ms = 100 kafka | sasl.mechanism.controller.protocol = GSSAPI kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.oauthbearer.clock.skew.seconds = 30 kafka | sasl.oauthbearer.expected.audience = null kafka | sasl.oauthbearer.expected.issuer = null kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | sasl.oauthbearer.jwks.endpoint.url = null kafka | sasl.oauthbearer.scope.claim.name = scope kafka | sasl.oauthbearer.sub.claim.name = sub kafka | sasl.oauthbearer.token.endpoint.url = null kafka | sasl.server.callback.handler.class = null kafka | sasl.server.max.receive.size = 524288 kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | server.max.startup.time.ms = 9223372036854775807 kafka | socket.connection.setup.timeout.max.ms = 30000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.listen.backlog.size = 50 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | ssl.principal.mapping.rules = DEFAULT kafka | ssl.protocol = TLSv1.3 kafka | ssl.provider = null kafka | ssl.secure.random.implementation = null kafka | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.certificates = null kafka | ssl.truststore.location = null kafka | ssl.truststore.password = null kafka | ssl.truststore.type = JKS kafka | 
transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 kafka | transaction.max.timeout.ms = 900000 kafka | transaction.partition.verification.enable = true kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 kafka | transaction.state.log.load.buffer.size = 5242880 kafka | transaction.state.log.min.isr = 2 kafka | transaction.state.log.num.partitions = 50 kafka | transaction.state.log.replication.factor = 3 kafka | transaction.state.log.segment.bytes = 104857600 kafka | transactional.id.expiration.ms = 604800000 kafka | unclean.leader.election.enable = false kafka | unstable.api.versions.enable = false kafka | zookeeper.clientCnxnSocket = null kafka | zookeeper.connect = zookeeper:2181 kafka | zookeeper.connection.timeout.ms = null kafka | zookeeper.max.in.flight.requests = 10 kafka | zookeeper.metadata.migration.enable = false kafka | zookeeper.session.timeout.ms = 18000 kafka | zookeeper.set.acl = false kafka | zookeeper.ssl.cipher.suites = null kafka | zookeeper.ssl.client.enable = false kafka | zookeeper.ssl.crl.enable = false kafka | zookeeper.ssl.enabled.protocols = null kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS kafka | zookeeper.ssl.keystore.location = null kafka | zookeeper.ssl.keystore.password = null kafka | zookeeper.ssl.keystore.type = null grafana | logger=migrator t=2024-02-25T23:14:16.720721508Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2024-02-25T23:14:16.721617065Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=895.127µs grafana | logger=migrator t=2024-02-25T23:14:16.725408977Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2024-02-25T23:14:16.725442427Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=33.45µs grafana | logger=migrator t=2024-02-25T23:14:16.73288551Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2024-02-25T23:14:16.734137574Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.251294ms grafana | logger=migrator t=2024-02-25T23:14:16.739473536Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2024-02-25T23:14:16.740344094Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=870.378µs grafana | logger=migrator t=2024-02-25T23:14:16.750738893Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2024-02-25T23:14:16.755467614Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.72318ms grafana | logger=migrator t=2024-02-25T23:14:16.761298216Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2024-02-25T23:14:16.761326836Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=29.54µs grafana | logger=migrator t=2024-02-25T23:14:16.79544791Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2024-02-25T23:14:16.796882528Z level=info msg="Migration successfully executed" id="create session table" duration=1.433638ms grafana | logger=migrator t=2024-02-25T23:14:16.986409474Z level=info 
msg="Executing migration" id="Drop old table playlist table" grafana | logger=migrator t=2024-02-25T23:14:16.986637848Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=227.474µs grafana | logger=migrator t=2024-02-25T23:14:16.996081659Z level=info msg="Executing migration" id="Drop old table playlist_item table" grafana | logger=migrator t=2024-02-25T23:14:16.996438086Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=357.407µs grafana | logger=migrator t=2024-02-25T23:14:17.006788742Z level=info msg="Executing migration" id="create playlist table v2" grafana | logger=migrator t=2024-02-25T23:14:17.008186638Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.393576ms grafana | logger=migrator t=2024-02-25T23:14:17.013481256Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2024-02-25T23:14:17.014637134Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.155318ms grafana | logger=migrator t=2024-02-25T23:14:17.021042902Z level=info msg="Executing migration" id="Update playlist table charset" grafana | logger=migrator t=2024-02-25T23:14:17.021121593Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=80.221µs grafana | logger=migrator t=2024-02-25T23:14:17.026261691Z level=info msg="Executing migration" id="Update playlist_item table charset" grafana | logger=migrator t=2024-02-25T23:14:17.026443224Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=180.953µs grafana | logger=migrator t=2024-02-25T23:14:17.030502906Z level=info msg="Executing migration" id="Add playlist column created_at" grafana | logger=migrator t=2024-02-25T23:14:17.035411221Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.906816ms grafana | logger=migrator t=2024-02-25T23:14:17.041635745Z level=info msg="Executing migration" id="Add playlist column updated_at" grafana | logger=migrator t=2024-02-25T23:14:17.045119408Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.482503ms grafana | logger=migrator t=2024-02-25T23:14:17.051395475Z level=info msg="Executing migration" id="drop preferences table v2" grafana | logger=migrator t=2024-02-25T23:14:17.051545317Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=149.702µs grafana | logger=migrator t=2024-02-25T23:14:17.055762851Z level=info msg="Executing migration" id="drop preferences table v3" grafana | logger=migrator t=2024-02-25T23:14:17.056085616Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=321.985µs grafana | logger=migrator t=2024-02-25T23:14:17.062828969Z level=info msg="Executing migration" id="create preferences table v3" grafana | logger=migrator t=2024-02-25T23:14:17.064064878Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.237029ms grafana | logger=migrator t=2024-02-25T23:14:17.071274868Z level=info msg="Executing migration" id="Update preferences table charset" grafana | logger=migrator t=2024-02-25T23:14:17.071451681Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=176.313µs grafana | logger=migrator t=2024-02-25T23:14:17.087176881Z level=info msg="Executing migration" id="Add 
column team_id in preferences" grafana | logger=migrator t=2024-02-25T23:14:17.090632374Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.458382ms grafana | logger=migrator t=2024-02-25T23:14:17.09766379Z level=info msg="Executing migration" id="Update team_id column values in preferences" grafana | logger=migrator t=2024-02-25T23:14:17.097856303Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=192.793µs grafana | logger=migrator t=2024-02-25T23:14:17.102521135Z level=info msg="Executing migration" id="Add column week_start in preferences" grafana | logger=migrator t=2024-02-25T23:14:17.105803565Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.275119ms grafana | logger=migrator t=2024-02-25T23:14:17.109852847Z level=info msg="Executing migration" id="Add column preferences.json_data" grafana | logger=migrator t=2024-02-25T23:14:17.114851402Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.998335ms grafana | logger=migrator t=2024-02-25T23:14:17.12060439Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" grafana | logger=migrator t=2024-02-25T23:14:17.120701252Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=97.232µs grafana | logger=migrator t=2024-02-25T23:14:17.124433129Z level=info msg="Executing migration" id="Add preferences index org_id" grafana | logger=migrator t=2024-02-25T23:14:17.126074144Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.640095ms grafana | logger=migrator t=2024-02-25T23:14:17.130470711Z level=info msg="Executing migration" id="Add preferences index user_id" grafana | logger=migrator t=2024-02-25T23:14:17.131816091Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.34569ms grafana | logger=migrator t=2024-02-25T23:14:17.137862074Z level=info msg="Executing migration" id="create alert table v1" policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.2:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.7:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.8) policy-api | policy-api | [2024-02-25T23:14:27.866+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 23 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-02-25T23:14:27.869+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-02-25T23:14:29.750+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-02-25T23:14:29.860+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 99 ms. Found 6 JPA repository interfaces. 
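policy-api gates its startup on its dependencies' TCP ports ("Waiting for mariadb port 3306...", then "mariadb (172.17.0.2:3306) open" once the port accepts connections). A minimal sketch of that gating, with service names and ports taken from the log; the 2-second retry interval is an assumption:

import socket
import time

def wait_for_port(host: str, port: int, interval: float = 2.0) -> None:
    """Block until host:port accepts a TCP connection, then report it open."""
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                print(f"{host} port {port} open")
                return
        except OSError:
            time.sleep(interval)               # dependency not up yet; retry

if __name__ == "__main__":
    wait_for_port("mariadb", 3306)             # database, as in the log
    wait_for_port("policy-db-migrator", 6824)  # schema migrator, as in the log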
policy-api | [2024-02-25T23:14:30.313+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-02-25T23:14:30.314+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-02-25T23:14:31.076+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-02-25T23:14:31.095+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-02-25T23:14:31.099+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-02-25T23:14:31.099+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] policy-api | [2024-02-25T23:14:31.204+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-02-25T23:14:31.205+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3255 ms policy-api | [2024-02-25T23:14:31.678+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-02-25T23:14:31.799+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-api | [2024-02-25T23:14:31.804+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer policy-api | [2024-02-25T23:14:31.854+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-02-25T23:14:32.245+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-02-25T23:14:32.268+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-02-25T23:14:32.376+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@63b3ee82 policy-api | [2024-02-25T23:14:32.378+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
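The grafana migrator entries threaded through this section each report a duration=<n>µs or duration=<n>ms field. A small, hypothetical log-reading utility (not part of this build) for totalling those fields when reviewing such output:

import re
import sys

# Matches e.g. duration=887.218µs or duration=2.399675ms
DURATION = re.compile(r"duration=([0-9.]+)(µs|ms|s)")
SCALE_MS = {"µs": 0.001, "ms": 1.0, "s": 1000.0}

def total_migration_ms(lines) -> float:
    """Sum every duration= field, normalised to milliseconds."""
    return sum(float(value) * SCALE_MS[unit]
               for line in lines
               for value, unit in DURATION.findall(line))

if __name__ == "__main__":
    # Usage: pipe the console log through stdin.
    print(f"total migration time: {total_migration_ms(sys.stdin):.3f} ms")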
policy-api | [2024-02-25T23:14:34.435+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2024-02-25T23:14:34.440+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2024-02-25T23:14:35.519+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | (kafka.server.KafkaConfig)
kafka | [2024-02-25 23:14:25,311] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-02-25 23:14:25,311] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-02-25 23:14:25,312] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-02-25 23:14:25,316] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-02-25 23:14:25,373] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2024-02-25 23:14:25,379] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
kafka | [2024-02-25 23:14:25,388] INFO Loaded 0 logs in 14ms (kafka.log.LogManager)
kafka | [2024-02-25 23:14:25,390] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2024-02-25 23:14:25,391] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka | [2024-02-25 23:14:25,403] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2024-02-25 23:14:25,478] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
kafka | [2024-02-25 23:14:25,493] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2024-02-25 23:14:25,508] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-02-25 23:14:25,534] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-02-25 23:14:25,866] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2024-02-25 23:14:25,888] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2024-02-25 23:14:25,888] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2024-02-25 23:14:25,894] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2024-02-25 23:14:25,898] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-02-25 23:14:25,923] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-02-25 23:14:25,924] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-02-25 23:14:25,926] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-02-25 23:14:25,929] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-02-25 23:14:25,930] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-02-25 23:14:25,941] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2024-02-25 23:14:25,943] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
kafka | [2024-02-25 23:14:25,968] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2024-02-25 23:14:26,007] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1708902865986,1708902865986,1,0,0,72057610244653057,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2024-02-25 23:14:26,008] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2024-02-25 23:14:26,065] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2024-02-25 23:14:26,073] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-02-25 23:14:26,079] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-02-25 23:14:26,080] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-02-25 23:14:26,095] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:26,101] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2024-02-25 23:14:26,105] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-02-25T23:14:17.139268575Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.406101ms
grafana | logger=migrator t=2024-02-25T23:14:17.208697525Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2024-02-25T23:14:17.21038267Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.686035ms
grafana | logger=migrator t=2024-02-25T23:14:17.216151539Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2024-02-25T23:14:17.217012022Z level=info msg="Migration successfully executed" id="add index alert state" duration=860.193µs
grafana | logger=migrator t=2024-02-25T23:14:17.222482095Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2024-02-25T23:14:17.223980047Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.497472ms
grafana | logger=migrator t=2024-02-25T23:14:17.22810883Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2024-02-25T23:14:17.228843731Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=735.101µs
grafana | logger=migrator t=2024-02-25T23:14:17.236762072Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2024-02-25T23:14:17.238205393Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.442601ms
grafana | logger=migrator t=2024-02-25T23:14:17.243625545Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2024-02-25T23:14:17.244804593Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.179838ms
grafana | logger=migrator t=2024-02-25T23:14:17.249305401Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2024-02-25T23:14:17.263517877Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=14.209036ms
grafana | logger=migrator t=2024-02-25T23:14:17.274571044Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2024-02-25T23:14:17.2755304Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=959.196µs
grafana | logger=migrator t=2024-02-25T23:14:17.280668948Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2024-02-25T23:14:17.282215571Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.546463ms
grafana | logger=migrator t=2024-02-25T23:14:17.393044202Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2024-02-25T23:14:17.393630811Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=581.439µs
grafana | logger=migrator t=2024-02-25T23:14:17.400757349Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2024-02-25T23:14:17.401606702Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=858.684µs
grafana | logger=migrator t=2024-02-25T23:14:17.406191471Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2024-02-25T23:14:17.406955054Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=764.653µs
grafana | logger=migrator t=2024-02-25T23:14:17.411851367Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2024-02-25T23:14:17.415370101Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.518734ms
grafana | logger=migrator t=2024-02-25T23:14:17.422806043Z level=info msg="Executing migration" id="Add column frequency"
grafana | logger=migrator t=2024-02-25T23:14:17.427136479Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.330496ms
grafana | logger=migrator t=2024-02-25T23:14:17.43046116Z level=info msg="Executing migration" id="Add column send_reminder"
grafana | logger=migrator t=2024-02-25T23:14:17.433878921Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.417321ms
grafana | logger=migrator t=2024-02-25T23:14:17.437347673Z level=info msg="Executing migration" id="Add column disable_resolve_message"
grafana | logger=migrator t=2024-02-25T23:14:17.440757724Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.409661ms
grafana | logger=migrator t=2024-02-25T23:14:17.463837216Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
grafana | logger=migrator t=2024-02-25T23:14:17.465299838Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.462422ms
grafana | logger=migrator t=2024-02-25T23:14:17.469000895Z level=info msg="Executing migration" id="Update alert table charset"
grafana | logger=migrator t=2024-02-25T23:14:17.469082776Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=82.921µs
grafana | logger=migrator t=2024-02-25T23:14:17.474605Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2024-02-25T23:14:17.47462642Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=21.521µs
grafana | logger=migrator t=2024-02-25T23:14:17.500883038Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2024-02-25T23:14:17.502065636Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.182648ms
grafana | logger=migrator t=2024-02-25T23:14:17.509930935Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2024-02-25T23:14:17.511778763Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.846978ms
grafana | logger=migrator t=2024-02-25T23:14:17.518321342Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2024-02-25T23:14:17.519123965Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=799.803µs
kafka | [2024-02-25 23:14:26,116] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,122] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,125] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-02-25 23:14:26,128] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-02-25 23:14:26,130] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2024-02-25 23:14:26,130] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-02-25 23:14:26,170] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2024-02-25 23:14:26,170] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,172] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-02-25 23:14:26,183] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,186] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,189] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,204] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,205] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2024-02-25 23:14:26,209] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,215] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2024-02-25 23:14:26,221] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
grafana | logger=migrator t=2024-02-25T23:14:17.525668294Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2024-02-25T23:14:17.526810561Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.141687ms
grafana | logger=migrator t=2024-02-25T23:14:17.533136588Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2024-02-25T23:14:17.53463258Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.491492ms
grafana | logger=migrator t=2024-02-25T23:14:17.53859548Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2024-02-25T23:14:17.544556301Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=5.961401ms
grafana | logger=migrator t=2024-02-25T23:14:17.549834961Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2024-02-25T23:14:17.553457715Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.621954ms
grafana | logger=migrator t=2024-02-25T23:14:17.559883713Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2024-02-25T23:14:17.560089746Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=206.873µs
grafana | logger=migrator t=2024-02-25T23:14:17.563771612Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2024-02-25T23:14:17.565238934Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.467182ms
grafana | logger=migrator t=2024-02-25T23:14:17.570856769Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2024-02-25T23:14:17.571722302Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=866.213µs
grafana | logger=migrator t=2024-02-25T23:14:17.578614677Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2024-02-25T23:14:17.582501736Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.881999ms
grafana | logger=migrator t=2024-02-25T23:14:17.586163102Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2024-02-25T23:14:17.586254103Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=94.561µs
grafana | logger=migrator t=2024-02-25T23:14:17.589403151Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2024-02-25T23:14:17.590311165Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=907.714µs
grafana | logger=migrator t=2024-02-25T23:14:17.596145333Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2024-02-25T23:14:17.597108648Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=963.425µs
grafana | logger=migrator t=2024-02-25T23:14:17.600977486Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2024-02-25T23:14:17.601082688Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=105.872µs
grafana | logger=migrator t=2024-02-25T23:14:17.604269636Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2024-02-25T23:14:17.605053948Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=783.602µs
grafana | logger=migrator t=2024-02-25T23:14:17.611840851Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2024-02-25T23:14:17.613282873Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.441542ms
grafana | logger=migrator t=2024-02-25T23:14:17.616864928Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2024-02-25T23:14:17.617810952Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=946.024µs
grafana | logger=migrator t=2024-02-25T23:14:17.62101353Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2024-02-25T23:14:17.621951944Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=938.424µs
grafana | logger=migrator t=2024-02-25T23:14:17.629085913Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2024-02-25T23:14:17.630351171Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.265618ms
grafana | logger=migrator t=2024-02-25T23:14:17.635042383Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2024-02-25T23:14:17.636034898Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=992.515µs
grafana | logger=migrator t=2024-02-25T23:14:17.639759815Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2024-02-25T23:14:17.639786975Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.74µs
grafana | logger=migrator t=2024-02-25T23:14:17.645481072Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.649411931Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.930659ms
grafana | logger=migrator t=2024-02-25T23:14:17.652704781Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2024-02-25T23:14:17.653624895Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=919.864µs
grafana | logger=migrator t=2024-02-25T23:14:17.656747633Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.660682192Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.931469ms
policy-api | [2024-02-25T23:14:36.405+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2024-02-25T23:14:37.613+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2024-02-25T23:14:37.893+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@607c7f58, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4bbb00a4, org.springframework.security.web.context.SecurityContextHolderFilter@6e11d059, org.springframework.security.web.header.HeaderWriterFilter@1d123972, org.springframework.security.web.authentication.logout.LogoutFilter@54e1e8a7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@206d4413, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@19bd1f98, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@69cf9acb, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@543d242e, org.springframework.security.web.access.ExceptionTranslationFilter@5b3063b7, org.springframework.security.web.access.intercept.AuthorizationFilter@407bfc49]
policy-api | [2024-02-25T23:14:38.918+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-api | [2024-02-25T23:14:39.026+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2024-02-25T23:14:39.055+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
policy-api | [2024-02-25T23:14:39.073+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.044 seconds (process running for 12.672)
policy-api | [2024-02-25T23:14:39.920+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2024-02-25T23:14:39.920+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-api | [2024-02-25T23:14:39.923+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
policy-api | [2024-02-25T23:15:00.221+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers:
policy-api | []
kafka | [2024-02-25 23:14:26,225] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2024-02-25 23:14:26,231] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka | [2024-02-25 23:14:26,231] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2024-02-25 23:14:26,235] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,236] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,237] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,237] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,241] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-02-25 23:14:26,241] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-02-25 23:14:26,241] INFO Kafka startTimeMs: 1708902866235 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-02-25 23:14:26,241] INFO [Controller id=1] List of topics to be deleted:  (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,241] INFO [Controller id=1] List of topics ineligible for deletion:  (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,242] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,242] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2024-02-25 23:14:26,242] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2024-02-25 23:14:26,244] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,247] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2024-02-25 23:14:26,260] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-02-25 23:14:26,260] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-02-25 23:14:26,263] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-02-25 23:14:26,264] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-02-25 23:14:26,264] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-02-25 23:14:26,265] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-02-25 23:14:26,268] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-02-25 23:14:26,268] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,282] INFO [Controller id=1] Partitions undergoing preferred replica election:  (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,285] INFO [Controller id=1] Partitions that completed preferred replica election:  (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,285] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion:  (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,286] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2024-02-25 23:14:26,286] INFO [Controller id=1] Resuming preferred replica election for partitions:  (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,287] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,312] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:26,355] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-02-25 23:14:26,363] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-02-25 23:14:26,407] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-02-25 23:14:31,313] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:31,314] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:52,372] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2024-02-25 23:14:52,373] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2024-02-25 23:14:52,385] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:52,392] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-25T23:14:17.667184671Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2024-02-25T23:14:17.667828081Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=643.21µs
grafana | logger=migrator t=2024-02-25T23:14:17.671638538Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2024-02-25T23:14:17.672567092Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=927.644µs
grafana | logger=migrator t=2024-02-25T23:14:17.676848807Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2024-02-25T23:14:17.677758351Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=909.384µs
grafana | logger=migrator t=2024-02-25T23:14:17.683908955Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2024-02-25T23:14:17.70011092Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=16.202105ms
grafana | logger=migrator t=2024-02-25T23:14:17.706430216Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2024-02-25T23:14:17.706931154Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=500.848µs
grafana | logger=migrator t=2024-02-25T23:14:17.710118822Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2024-02-25T23:14:17.711704236Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.584404ms
grafana | logger=migrator t=2024-02-25T23:14:17.719327692Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2024-02-25T23:14:17.719663497Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=331.144µs
policy-apex-pdp | Waiting for mariadb port 3306...
policy-apex-pdp | mariadb (172.17.0.2:3306) open
policy-apex-pdp | Waiting for kafka port 9092...
policy-apex-pdp | kafka (172.17.0.9:9092) open
policy-apex-pdp | Waiting for pap port 6969...
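[Editor's note] The "Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1))" entry above is the broker auto-creating the PAP/PDP message topic with one partition and a single replica. For readers reproducing this outside the CSIT compose environment, here is a minimal sketch of creating the same topic explicitly with Kafka's Admin API; the bootstrap address kafka:9092 and the topic layout come from the log above, while the class name and error handling are illustrative assumptions, not part of the CSIT job.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

// Hypothetical helper, not part of the ONAP code base.
public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Broker address as advertised in the log: PLAINTEXT://kafka:9092
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            // 1 partition, replication factor 1 -- mirroring the
            // "HashMap(0 -> ArrayBuffer(1))" assignment logged above.
            NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}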
policy-apex-pdp | pap (172.17.0.10:6969) open
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
policy-apex-pdp | [2024-02-25T23:14:53.219+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
policy-apex-pdp | [2024-02-25T23:14:53.451+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | auto.offset.reset = latest
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-1
policy-apex-pdp | client.rack =
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = b53cde7a-481f-427a-882b-d5bcee52ac2a
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | session.timeout.ms = 45000
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp |
policy-apex-pdp | [2024-02-25T23:14:53.620+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-apex-pdp | [2024-02-25T23:14:53.620+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-apex-pdp | [2024-02-25T23:14:53.620+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902893618
policy-apex-pdp | [2024-02-25T23:14:53.623+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-1, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Subscribed to topic(s): policy-pdp-pap
policy-apex-pdp | [2024-02-25T23:14:53.636+00:00|INFO|ServiceManager|main] service manager starting
policy-apex-pdp | [2024-02-25T23:14:53.636+00:00|INFO|ServiceManager|main] service manager starting topics
policy-apex-pdp | [2024-02-25T23:14:53.640+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b53cde7a-481f-427a-882b-d5bcee52ac2a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-apex-pdp | [2024-02-25T23:14:53.661+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-apex-pdp | auto.include.jmx.reporter = true
grafana | logger=migrator t=2024-02-25T23:14:17.723707748Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
grafana | logger=migrator t=2024-02-25T23:14:17.724217636Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=509.638µs
grafana | logger=migrator t=2024-02-25T23:14:17.730603233Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
grafana | logger=migrator t=2024-02-25T23:14:17.730903337Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=299.734µs
grafana | logger=migrator t=2024-02-25T23:14:17.738753226Z level=info msg="Executing migration" id="Add created time to annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.744454003Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.705957ms
grafana | logger=migrator t=2024-02-25T23:14:17.790002333Z level=info msg="Executing migration" id="Add updated time to annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.794906308Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.904865ms
grafana | logger=migrator t=2024-02-25T23:14:17.800084517Z level=info msg="Executing migration" id="Add index for created in annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.801210133Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.125026ms
grafana | logger=migrator t=2024-02-25T23:14:17.806692477Z level=info msg="Executing migration" id="Add index for updated in annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.807617491Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=918.874µs
grafana | logger=migrator t=2024-02-25T23:14:17.811756044Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
grafana | logger=migrator t=2024-02-25T23:14:17.811986987Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=230.943µs
grafana | logger=migrator t=2024-02-25T23:14:17.818469056Z level=info msg="Executing migration" id="Add epoch_end column"
grafana | logger=migrator t=2024-02-25T23:14:17.824830952Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.360596ms
grafana | logger=migrator t=2024-02-25T23:14:17.833626145Z level=info msg="Executing migration" id="Add index for epoch_end"
grafana | logger=migrator t=2024-02-25T23:14:17.834349056Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=722.401µs
grafana | logger=migrator t=2024-02-25T23:14:17.840836075Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
grafana | logger=migrator t=2024-02-25T23:14:17.84112353Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=287.455µs
grafana | logger=migrator t=2024-02-25T23:14:17.848409129Z level=info msg="Executing migration" id="Move region to single row"
grafana | logger=migrator t=2024-02-25T23:14:17.848868516Z level=info msg="Migration successfully executed" id="Move region to single row" duration=459.277µs
grafana | logger=migrator t=2024-02-25T23:14:17.852966879Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.854322899Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.35595ms
grafana | logger=migrator t=2024-02-25T23:14:17.859726381Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.860616895Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=888.514µs
grafana | logger=migrator t=2024-02-25T23:14:17.865552959Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.866512955Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=959.996µs
grafana | logger=migrator t=2024-02-25T23:14:17.871584712Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.872520646Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=931.814µs
grafana | logger=migrator t=2024-02-25T23:14:17.876598877Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.878189532Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.589755ms
grafana | logger=migrator t=2024-02-25T23:14:17.885421491Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
grafana | logger=migrator t=2024-02-25T23:14:17.887036516Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.614485ms
grafana | logger=migrator t=2024-02-25T23:14:17.901030728Z level=info msg="Executing migration" id="Increase tags column to length 4096"
grafana | logger=migrator t=2024-02-25T23:14:17.901216031Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=213.233µs
grafana | logger=migrator t=2024-02-25T23:14:17.909147081Z level=info msg="Executing migration" id="create test_data table"
grafana | logger=migrator t=2024-02-25T23:14:17.910325099Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.180528ms
grafana | logger=migrator t=2024-02-25T23:14:17.916549964Z level=info msg="Executing migration" id="create dashboard_version table v1"
grafana | logger=migrator t=2024-02-25T23:14:17.917840583Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.290619ms
grafana | logger=migrator t=2024-02-25T23:14:17.928764129Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
grafana | logger=migrator t=2024-02-25T23:14:17.931385169Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=2.62064ms
grafana | logger=migrator t=2024-02-25T23:14:17.936928563Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
grafana | logger=migrator t=2024-02-25T23:14:17.937914577Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=986.014µs
policy-db-migrator | Waiting for mariadb port 3306...
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded!
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | name version
policy-db-migrator | policyadmin 0
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | upgrade: 0 -> 1300
grafana | logger=migrator t=2024-02-25T23:14:17.942424866Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
grafana | logger=migrator t=2024-02-25T23:14:17.94266287Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=239.854µs
grafana | logger=migrator t=2024-02-25T23:14:17.948935305Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
grafana | logger=migrator t=2024-02-25T23:14:17.949602145Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=667.01µs
grafana | logger=migrator t=2024-02-25T23:14:17.957323162Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
policy-apex-pdp | auto.offset.reset = latest
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2
policy-apex-pdp | client.rack =
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
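[Editor's note] The ConsumerConfig blocks above are Kafka's standard dump of the effective consumer settings; most values shown are library defaults. Below is a minimal sketch of a consumer built with the handful of non-default values visible in the log (bootstrap servers, the generated group.id, latest offset reset, string deserializers) and subscribed to policy-pdp-pap as apex-pdp does; the class name and the poll loop are assumptions for illustration, not the ONAP wrapper code itself.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Hypothetical helper, not part of the ONAP code base.
public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        // The group.id in the log is a per-run UUID; any stable id works here.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "b53cde7a-481f-427a-882b-d5bcee52ac2a");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            // 15 s matches the fetchTimeout=15000 in the SingleThreadedBusTopicSource entry above.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}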
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = b53cde7a-481f-427a-882b-d5bcee52ac2a
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
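[Editor's note] Each "> upgrade NNNN-*.sql" step above is a plain SQL file that policy-db-migrator applies to MariaDB in order; the CREATE TABLE IF NOT EXISTS form makes each step idempotent, so re-running an upgrade is harmless. A minimal JDBC sketch of applying one such statement is below, under stated assumptions: the JDBC URL, database name, and credentials are placeholders, not values taken from this job.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical helper, not part of policy-db-migrator itself.
public class MigrationStepSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; the CSIT job wires its own via config.
        String url = "jdbc:mariadb://mariadb:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            // Statement copied from 0190-jpatoscacapabilitytype_metadata.sql above;
            // IF NOT EXISTS keeps the step idempotent across re-runs.
            stmt.execute("CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata "
                    + "(name VARCHAR(120) NULL, version VARCHAR(20) NULL, "
                    + "METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)");
        }
    }
}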
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | session.timeout.ms = 45000
policy-db-migrator |
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
kafka | [2024-02-25 23:14:52,420] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(9kyEG5R7S_ymSJoFuQGdeg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(19qiw_gSQSuGAZ9hqdP69g),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:52,423] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
kafka | [2024-02-25 23:14:52,428] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:17.958020722Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=692.78µs
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
grafana | logger=migrator t=2024-02-25T23:14:17.969311523Z level=info msg="Executing migration" id="create team table"
policy-pap | Waiting for mariadb port 3306...
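For reference: the ConsumerConfig dump that policy-apex-pdp printed above maps one-to-one onto the properties handed to a plain org.apache.kafka.clients.consumer.KafkaConsumer. A minimal Java sketch using only the values visible in this log (the kafka:9092 broker, the logged group.id, StringDeserializer, and the policy-pdp-pap topic apex-pdp subscribes to later) — a sketch, not the actual apex-pdp source:

  import java.time.Duration;
  import java.util.List;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  public class PdpPapConsumerSketch {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");     // broker address used by this compose stack
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "b53cde7a-481f-427a-882b-d5bcee52ac2a"); // group.id from the dump above
          props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
          props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
          props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);            // enable.auto.commit = true
          props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45000);           // session.timeout.ms = 45000
          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
              consumer.subscribe(List.of("policy-pdp-pap"));                    // topic seen in the "Subscribed to topic(s)" line below
              // one illustrative poll; a real PDP loops on this
              ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
              records.forEach(r -> System.out.printf("%s[%d]@%d: %s%n", r.topic(), r.partition(), r.offset(), r.value()));
          }
      }
  }

Every other key in the dump (fetch sizes, SASL settings, and so on) is simply the client default, which is why they all appear even though the stack runs PLAINTEXT with no SASL.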
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
kafka | [2024-02-25 23:14:52,429] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:564 level=info msg="No time or size retention was set so using the default time retention" duration=15d
grafana | logger=migrator t=2024-02-25T23:14:17.970051995Z level=info msg="Migration successfully executed" id="create team table" duration=744.702µs
policy-pap | mariadb (172.17.0.2:3306) open
policy-apex-pdp | ssl.cipher.suites = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
grafana | logger=migrator t=2024-02-25T23:14:17.979246785Z level=info msg="Executing migration" id="add index team.org_id"
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
policy-pap | Waiting for kafka port 9092...
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:608 level=info msg="Starting Prometheus Server" mode=server version="(version=2.50.0, branch=HEAD, revision=814b920e8a6345d35712b5857ebd4cb5e90fc107)"
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:17.98030014Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.052215ms
simulator | overriding logback.xml
policy-pap | kafka (172.17.0.9:9092) open
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:613 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@384077e1cf50, date=20240222-09:38:19, tags=netgo,builtinassets,stringlabels)"
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:17.986751539Z level=info msg="Executing migration" id="add unique index team_org_id_name"
simulator | 2024-02-25 23:14:18,164 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
policy-pap | Waiting for api port 6969...
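The pap container's "Waiting for ... port" / "... open" lines come from its entrypoint polling each dependency's TCP port before starting the application (in the image this is a shell loop, not Java). A rough, hypothetical Java equivalent of that readiness check, with the hosts and ports taken from the log:

  import java.io.IOException;
  import java.net.InetSocketAddress;
  import java.net.Socket;

  public class WaitForPort {
      // Polls host:port until a TCP connection succeeds, mirroring the log output above.
      static void waitFor(String host, int port) throws InterruptedException {
          while (true) {
              try (Socket s = new Socket()) {
                  s.connect(new InetSocketAddress(host, port), 2000);
                  System.out.printf("%s (%d) open%n", host, port);
                  return;
              } catch (IOException retry) {
                  System.out.printf("Waiting for %s port %d...%n", host, port);
                  Thread.sleep(1000);
              }
          }
      }

      public static void main(String[] args) throws InterruptedException {
          waitFor("mariadb", 3306);  // database first
          waitFor("kafka", 9092);    // then the message bus
          waitFor("api", 6969);      // then the policy-api service
      }
  }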
policy-apex-pdp | ssl.engine.factory.class = null
kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:614 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:17.987702523Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=950.424µs
simulator | 2024-02-25 23:14:18,248 INFO org.onap.policy.models.simulators starting
policy-pap | api (172.17.0.8:6969) open
policy-apex-pdp | ssl.key.password = null
kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:615 level=info fd_limits="(soft=1048576, hard=1048576)"
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
grafana | logger=migrator t=2024-02-25T23:14:17.993399739Z level=info msg="Executing migration" id="Add column uid in team"
simulator | 2024-02-25 23:14:18,248 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.546Z caller=main.go:616 level=info vm_limits="(soft=unlimited, hard=unlimited)"
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.001561603Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=8.162324ms
simulator | 2024-02-25 23:14:18,459 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-apex-pdp | ssl.keystore.certificate.chain = null
kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.550Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-02-25T23:14:18.006030581Z level=info msg="Executing migration" id="Update uid column values in team"
simulator | 2024-02-25 23:14:18,460 INFO org.onap.policy.models.simulators starting A&AI simulator
policy-pap |
policy-apex-pdp | ssl.keystore.key = null
prometheus | ts=2024-02-25T23:14:11.550Z caller=main.go:1118 level=info msg="Starting TSDB ..."
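Each db-migrator step in this log boils down to executing one idempotent DDL statement against MariaDB. A minimal JDBC sketch of such a step, using the 0400 statement shown above (the connection URL, schema name, and credentials here are placeholders — the real migrator reads its own from its environment — and the mariadb-java-client driver is assumed on the classpath):

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.Statement;

  public class MigrationStepSketch {
      public static void main(String[] args) throws Exception {
          // Hypothetical URL/credentials for illustration only.
          String url = "jdbc:mariadb://mariadb:3306/policyadmin";
          String ddl = "CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences "
                     + "(name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)";
          try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
               Statement stmt = conn.createStatement()) {
              stmt.execute(ddl); // idempotent: IF NOT EXISTS makes re-runs safe
          }
      }
  }

The IF NOT EXISTS guard is what lets the migrator re-run the whole 0100-0900 script series safely on an already-initialized database.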
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.006217163Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=186.202µs
simulator | 2024-02-25 23:14:18,584 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-pap | . ____ _ __ _ _
kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-apex-pdp | ssl.keystore.location = null
prometheus | ts=2024-02-25T23:14:11.557Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.013073285Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
simulator | 2024-02-25 23:14:18,606 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.557Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.014392264Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.318589ms
simulator | 2024-02-25 23:14:18,610 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-apex-pdp | ssl.keystore.password = null
kafka | [2024-02-25 23:14:52,430] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.559Z caller=head.go:610 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
grafana | logger=migrator t=2024-02-25T23:14:18.019410479Z level=info msg="Executing migration" id="create team member table"
simulator | 2024-02-25 23:14:18,618 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
policy-apex-pdp | ssl.keystore.type = JKS
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.559Z caller=head.go:692 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.32µs
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.02010861Z level=info msg="Migration successfully executed" id="create team member table" duration=698.481µs
simulator | 2024-02-25 23:14:18,681 INFO Session workerName=node0
policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
policy-apex-pdp | ssl.protocol = TLSv1.3
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.559Z caller=head.go:700 level=info component=tsdb msg="Replaying WAL, this may take a while"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-02-25T23:14:18.028252199Z level=info msg="Executing migration" id="add index team_member.org_id"
simulator | 2024-02-25 23:14:19,289 INFO Using GSON for REST calls
policy-pap | =========|_|==============|___/=/_/_/_/
policy-apex-pdp | ssl.provider = null
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.560Z caller=head.go:771 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.029797113Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.543954ms
simulator | 2024-02-25 23:14:19,464 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}
policy-pap | :: Spring Boot :: (v3.1.8)
policy-apex-pdp | ssl.secure.random.implementation = null
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.560Z caller=head.go:808 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=174.604µs wal_replay_duration=448.569µs wbl_replay_duration=360ns total_replay_duration=653.233µs
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.037340185Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
simulator | 2024-02-25 23:14:19,478 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
policy-pap |
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.562Z caller=main.go:1139 level=info fs_type=EXT4_SUPER_MAGIC
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.039466436Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=2.12526ms
simulator | 2024-02-25 23:14:19,486 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1852ms
policy-pap | [2024-02-25T23:14:41.472+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 31 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-apex-pdp | ssl.truststore.certificates = null
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.562Z caller=main.go:1142 level=info msg="TSDB started"
policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
grafana | logger=migrator t=2024-02-25T23:14:18.045474875Z level=info msg="Executing migration" id="add index team_member.team_id"
simulator | 2024-02-25 23:14:19,486 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4124 ms.
policy-pap | [2024-02-25T23:14:41.474+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-apex-pdp | ssl.truststore.location = null
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.562Z caller=main.go:1324 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.046538161Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.063105ms
simulator | 2024-02-25 23:14:19,495 INFO org.onap.policy.models.simulators starting SDNC simulator
policy-pap | [2024-02-25T23:14:43.517+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-apex-pdp | ssl.truststore.password = null
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.564Z caller=main.go:1361 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.204703ms db_storage=2.46µs remote_storage=2.52µs web_handler=790ns query_engine=2.29µs scrape=298.586µs scrape_sd=144.852µs notify=40.741µs notify_sd=12.94µs rules=3.2µs tracing=7.79µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-02-25T23:14:18.056265704Z level=info msg="Executing migration" id="Add column email to team table"
simulator | 2024-02-25 23:14:19,499 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-pap | [2024-02-25T23:14:43.621+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 93 ms. Found 7 JPA repository interfaces.
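The simulator's JettyJerseyServer dumps above describe the same pattern for each endpoint (A&AI on 6666, SDNC on 6668, and so on): an embedded Jetty 11 Server with a Jersey ServletContainer mounted at /*. A stripped-down sketch of that wiring, assuming the jetty-servlet and jersey-container-servlet artifacts and a hypothetical resource package (the real simulator registers its own JAX-RS classes such as AaiSimulatorJaxRs):

  import org.eclipse.jetty.server.Server;
  import org.eclipse.jetty.servlet.ServletContextHandler;
  import org.eclipse.jetty.servlet.ServletHolder;
  import org.glassfish.jersey.servlet.ServletContainer;

  public class MiniSimulatorSketch {
      public static void main(String[] args) throws Exception {
          Server server = new Server(6666); // A&AI simulator port from the log
          ServletContextHandler context = new ServletContextHandler();
          context.setContextPath("/");      // contextPath=/ as in the dump
          ServletHolder jersey = new ServletHolder(new ServletContainer());
          // Hypothetical package; the real simulator supplies its own JAX-RS resources.
          jersey.setInitParameter("jersey.config.server.provider.packages", "org.example.sim.rest");
          context.addServlet(jersey, "/*"); // matches the /* servlet mapping in the log
          server.setHandler(context);
          server.start();                   // produces the Started ... / STARTED lines seen here
          server.join();
      }
  }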
policy-apex-pdp | ssl.truststore.type = JKS
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.564Z caller=main.go:1103 level=info msg="Server is ready to receive web requests."
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.063893387Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.628283ms
simulator | 2024-02-25 23:14:19,500 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | [2024-02-25T23:14:44.042+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
prometheus | ts=2024-02-25T23:14:11.564Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.070070308Z level=info msg="Executing migration" id="Add column external to team_member table"
simulator | 2024-02-25 23:14:19,503 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-pap | [2024-02-25T23:14:44.042+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-apex-pdp |
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.075085273Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.013205ms
simulator | 2024-02-25 23:14:19,504 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
policy-pap | [2024-02-25T23:14:44.796+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-apex-pdp | [2024-02-25T23:14:53.669+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
grafana | logger=migrator t=2024-02-25T23:14:18.081618539Z level=info msg="Executing migration" id="Add column permission to team_member table"
simulator | 2024-02-25 23:14:19,517 INFO Session workerName=node0
policy-pap | [2024-02-25T23:14:44.807+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-apex-pdp | [2024-02-25T23:14:53.669+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.08840046Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=6.779421ms
simulator | 2024-02-25 23:14:19,593 INFO Using GSON for REST calls
policy-pap | [2024-02-25T23:14:44.809+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-apex-pdp | [2024-02-25T23:14:53.669+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902893669
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-02-25T23:14:18.095442584Z level=info msg="Executing migration" id="create dashboard acl table"
simulator | 2024-02-25 23:14:19,608 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}
policy-pap | [2024-02-25T23:14:44.810+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
policy-apex-pdp | [2024-02-25T23:14:53.669+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Subscribed to topic(s): policy-pdp-pap
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.096230677Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=787.253µs
simulator | 2024-02-25 23:14:19,610 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
policy-pap | [2024-02-25T23:14:44.933+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.103510794Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
simulator | 2024-02-25 23:14:19,610 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1976ms
policy-pap | [2024-02-25T23:14:44.934+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3368 ms
policy-db-migrator |
policy-apex-pdp | [2024-02-25T23:14:53.670+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=39c8ecad-0633-4ba4-9ca4-00222bde67e2, alive=false, publisher=null]]: starting
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.10592998Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=2.414946ms
policy-pap | [2024-02-25T23:14:45.401+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
simulator | 2024-02-25 23:14:19,613 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4890 ms.
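The controller's NonExistentPartition-to-NewPartition transitions above follow the creation of the policy-pdp-pap topic (a single partition, replication factor 1 on this one-broker stack); the broker itself materializes the 50-partition __consumer_offsets internal topic. For orientation, a hedged Java AdminClient sketch of creating the work topic with that layout — illustrative only, since in this CSIT run the topic may equally be auto-created on first use:

  import java.util.List;
  import java.util.Properties;
  import org.apache.kafka.clients.admin.Admin;
  import org.apache.kafka.clients.admin.AdminClientConfig;
  import org.apache.kafka.clients.admin.NewTopic;

  public class CreateTopicSketch {
      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // broker from this log
          try (Admin admin = Admin.create(props)) {
              // 1 partition, replication factor 1 -> matches "ReplicaAssignment(replicas=1, ...)" above
              admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1)))
                   .all()
                   .get(); // block until the controller has processed the creation
          }
      }
  }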
policy-db-migrator |
policy-apex-pdp | [2024-02-25T23:14:53.683+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.112628959Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
policy-pap | [2024-02-25T23:14:45.496+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
simulator | 2024-02-25 23:14:19,614 INFO org.onap.policy.models.simulators starting SO simulator
policy-db-migrator | > upgrade 0450-pdpgroup.sql
policy-apex-pdp | acks = -1
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.113844237Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.214808ms
policy-pap | [2024-02-25T23:14:45.500+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
simulator | 2024-02-25 23:14:19,623 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-db-migrator | --------------
policy-apex-pdp | auto.include.jmx.reporter = true
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.121111755Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
policy-pap | [2024-02-25T23:14:45.550+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
simulator | 2024-02-25 23:14:19,623 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
policy-apex-pdp | batch.size = 16384
kafka | [2024-02-25 23:14:52,431] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.122439594Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.327199ms
policy-pap | [2024-02-25T23:14:45.942+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
simulator | 2024-02-25 23:14:19,628 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | --------------
policy-apex-pdp | bootstrap.servers = [kafka:9092]
kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.129759003Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
policy-pap | [2024-02-25T23:14:45.966+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
simulator | 2024-02-25 23:14:19,628 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
policy-db-migrator |
policy-apex-pdp | buffer.memory = 33554432
kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.130774547Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.014804ms
policy-pap | [2024-02-25T23:14:46.086+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@124ac145
simulator | 2024-02-25 23:14:19,636 INFO Session workerName=node0
policy-db-migrator |
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.137399785Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
policy-pap | [2024-02-25T23:14:46.089+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
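The HikariPool-1 lines pap just printed are the standard HikariCP lifecycle: the pool logs "Starting...", opens its first MariaDB connection, then logs "Start completed." A minimal sketch of the same setup (URL and credentials are placeholders — pap reads its own from papParameters.yaml — and mariadb-java-client plus HikariCP are assumed on the classpath):

  import com.zaxxer.hikari.HikariConfig;
  import com.zaxxer.hikari.HikariDataSource;
  import java.sql.Connection;

  public class PapDataSourceSketch {
      public static void main(String[] args) throws Exception {
          HikariConfig config = new HikariConfig();
          config.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // hypothetical URL
          config.setUsername("policy_user");                           // hypothetical credentials
          config.setPassword("policy_user");
          config.setMaximumPoolSize(10);
          try (HikariDataSource ds = new HikariDataSource(config);     // logs "HikariPool-1 - Starting..."
               Connection conn = ds.getConnection()) {                 // pool is usable once "Start completed." appears
              System.out.println("connected: " + conn.getMetaData().getURL());
          }
      }
  }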
simulator | 2024-02-25 23:14:19,706 INFO Using GSON for REST calls
policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
policy-apex-pdp | client.id = producer-1
kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.138652904Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.251939ms
policy-pap | [2024-02-25T23:14:48.229+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
simulator | 2024-02-25 23:14:19,723 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}
policy-db-migrator | --------------
policy-apex-pdp | compression.type = none
kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.14415786Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
policy-pap | [2024-02-25T23:14:48.245+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
simulator | 2024-02-25 23:14:19,728 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-apex-pdp | connections.max.idle.ms = 540000
kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.146713634Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=2.556778ms
policy-pap | [2024-02-25T23:14:48.788+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
simulator | 2024-02-25 23:14:19,729 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @2094ms
policy-db-migrator | --------------
policy-apex-pdp | delivery.timeout.ms = 120000
kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.15371577Z level=info msg="Executing migration" id="add index dashboard_permission"
policy-pap | [2024-02-25T23:14:49.240+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
simulator | 2024-02-25 23:14:19,729 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4899 ms.
policy-db-migrator |
policy-apex-pdp | enable.idempotence = true
kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.155381722Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.663875ms
policy-pap | [2024-02-25T23:14:49.361+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
simulator | 2024-02-25 23:14:19,732 INFO org.onap.policy.models.simulators starting VFC simulator
policy-db-migrator |
policy-apex-pdp | interceptor.classes = []
grafana | logger=migrator t=2024-02-25T23:14:18.196652393Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
kafka | [2024-02-25 23:14:52,432] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-pap | [2024-02-25T23:14:49.699+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
simulator | 2024-02-25 23:14:19,736 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-db-migrator | > upgrade 0470-pdp.sql
policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
grafana | logger=migrator t=2024-02-25T23:14:18.197352933Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=705.60µs
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | allow.auto.create.topics = true
simulator | 2024-02-25 23:14:19,737 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | --------------
policy-apex-pdp | linger.ms = 0
grafana | logger=migrator t=2024-02-25T23:14:18.204451658Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | auto.commit.interval.ms = 5000
simulator | 2024-02-25 23:14:19,739 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-apex-pdp | max.block.ms = 60000
grafana | logger=migrator t=2024-02-25T23:14:18.204713431Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=262.33µs
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | auto.include.jmx.reporter = true
simulator | 2024-02-25 23:14:19,740 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
policy-db-migrator | --------------
policy-apex-pdp | max.in.flight.requests.per.connection = 5
grafana | logger=migrator t=2024-02-25T23:14:18.209360081Z level=info msg="Executing migration" id="create tag table"
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | auto.offset.reset = latest
simulator | 2024-02-25 23:14:19,744 INFO Session workerName=node0
policy-db-migrator |
policy-apex-pdp | max.request.size = 1048576
grafana | logger=migrator t=2024-02-25T23:14:18.210154882Z level=info msg="Migration successfully executed" id="create tag table" duration=794.611µs
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | bootstrap.servers = [kafka:9092]
simulator | 2024-02-25 23:14:19,800 INFO Using GSON for REST calls
policy-db-migrator |
policy-apex-pdp | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-02-25T23:14:18.21881994Z level=info msg="Executing migration" id="add index tag.key_value"
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | check.crcs = true
simulator | 2024-02-25 23:14:19,810 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-apex-pdp | metadata.max.idle.ms = 300000
grafana | logger=migrator t=2024-02-25T23:14:18.219873417Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.055107ms
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | client.dns.lookup = use_all_dns_ips
simulator | 2024-02-25 23:14:19,813 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
policy-db-migrator | --------------
policy-apex-pdp | metric.reporters = []
grafana | logger=migrator t=2024-02-25T23:14:18.226840669Z level=info msg="Executing migration" id="create login attempt table"
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | client.id = consumer-bd340acf-32e5-46ed-9341-bc882164db21-1
simulator | 2024-02-25 23:14:19,813 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @2179ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
policy-apex-pdp | metrics.num.samples = 2
grafana | logger=migrator t=2024-02-25T23:14:18.228130409Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.28859ms
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | client.rack =
simulator | 2024-02-25 23:14:19,813 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4926 ms.
policy-db-migrator | --------------
policy-apex-pdp | metrics.recording.level = INFO
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-02-25T23:14:18.234563744Z level=info msg="Executing migration" id="add index login_attempt.username"
simulator | 2024-02-25 23:14:19,815 INFO org.onap.policy.models.simulators started
policy-db-migrator |
policy-apex-pdp | metrics.sample.window.ms = 30000
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-02-25T23:14:18.235815533Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.252909ms
policy-db-migrator |
policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-02-25T23:14:18.243320733Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-apex-pdp | partitioner.availability.timeout.ms = 0
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-02-25T23:14:18.244467381Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.146308ms
policy-db-migrator | --------------
policy-apex-pdp | partitioner.class = null
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-02-25T23:14:18.251620196Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
policy-apex-pdp | partitioner.ignore.keys = false
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName
VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-pap | fetch.max.wait.ms = 500 grafana | logger=migrator t=2024-02-25T23:14:18.268661028Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.039472ms policy-apex-pdp | receive.buffer.bytes = 32768 policy-db-migrator | policy-db-migrator | policy-pap | fetch.min.bytes = 1 grafana | logger=migrator t=2024-02-25T23:14:18.275292617Z level=info msg="Executing migration" id="create login_attempt v2" policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | -------------- policy-pap | group.id = bd340acf-32e5-46ed-9341-bc882164db21 grafana | logger=migrator t=2024-02-25T23:14:18.275807695Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=515.508µs policy-apex-pdp | reconnect.backoff.ms = 50 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-pap | group.instance.id = null grafana | logger=migrator t=2024-02-25T23:14:18.28226459Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" policy-apex-pdp | request.timeout.ms = 30000 policy-db-migrator | policy-db-migrator | policy-pap | heartbeat.interval.ms = 3000 grafana | logger=migrator t=2024-02-25T23:14:18.283699182Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.434672ms policy-apex-pdp | retries = 2147483647 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-02-25T23:14:18.288190867Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" policy-apex-pdp | retry.backoff.ms = 100 kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | internal.leave.group.on.close = true grafana | logger=migrator t=2024-02-25T23:14:18.289067711Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=876.354µs policy-apex-pdp | sasl.client.callback.handler.class = null kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name 
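The db-migrator output interleaved above applies numbered upgrade scripts (0480-pdpstatistics.sql, 0490-pdpsubgroup_pdp.sql, 0500-pdpsubgroup.sql, ...) whose DDL is deliberately idempotent (CREATE TABLE IF NOT EXISTS), so a re-run is harmless. A minimal sketch of replaying one such script over plain JDBC; the connection URL, credentials, and file path are hypothetical illustrations, not values taken from this log:

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MigrationSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; the real migrator reads its own config.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_pass");
             Statement stmt = conn.createStatement()) {
            // Split the upgrade script on ';' and run it statement by statement.
            String script = Files.readString(Path.of("0480-pdpstatistics.sql"));
            for (String sql : script.split(";")) {
                if (!sql.isBlank()) {
                    stmt.execute(sql); // CREATE TABLE IF NOT EXISTS makes re-runs safe
                }
            }
        }
    }
}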
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-02-25T23:14:18.294767355Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
policy-apex-pdp | sasl.jaas.config = null
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-02-25T23:14:18.295396355Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=628.29µs
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-25T23:14:18.303598106Z level=info msg="Executing migration" id="create user auth table"
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-02-25T23:14:18.304297166Z level=info msg="Migration successfully executed" id="create user auth table" duration=698.6µs
policy-apex-pdp | sasl.kerberos.service.name = null
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-02-25T23:14:18.312477567Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-02-25T23:14:18.314432636Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.949199ms
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-02-25T23:14:18.320908392Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
policy-apex-pdp | sasl.login.callback.handler.class = null
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-02-25T23:14:18.320964403Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=55.681µs
policy-apex-pdp | sasl.login.class = null
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-02-25T23:14:18.328464544Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
policy-apex-pdp | sasl.login.connect.timeout.ms = null
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-02-25T23:14:18.332175919Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.710145ms
policy-apex-pdp | sasl.login.read.timeout.ms = null
kafka | [2024-02-25 23:14:52,438] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-02-25T23:14:18.33764564Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-02-25T23:14:18.343476966Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.830866ms
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-02-25T23:14:18.346617683Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-02-25T23:14:18.3518719Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.254377ms
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-02-25T23:14:18.361453352Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-02-25T23:14:18.367119736Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.667445ms
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-25T23:14:18.373283188Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
policy-apex-pdp | sasl.mechanism = GSSAPI
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-02-25T23:14:18.374255732Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=973.664µs
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-02-25T23:14:18.379674642Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-02-25T23:14:18.387795432Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=8.11935ms
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.394920557Z level=info msg="Executing migration" id="create server_lock table"
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.395882152Z level=info msg="Migration successfully executed" id="create server_lock table" duration=962.395µs
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
grafana | logger=migrator t=2024-02-25T23:14:18.401294342Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.403025087Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.731415ms
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
grafana | logger=migrator t=2024-02-25T23:14:18.409105417Z level=info msg="Executing migration" id="create user auth token table"
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.410350656Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.246209ms
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.login.class = null
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.416317514Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
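The policy-pap ConsumerConfig dump threaded through the log above (group.id = bd340acf-32e5-46ed-9341-bc882164db21, bootstrap.servers = [kafka:9092], auto.offset.reset = latest, StringDeserializer for key and value) maps directly onto a plain Java Kafka consumer. A minimal sketch built from those logged values and subscribed to policy-pdp-pap, assuming the standard org.apache.kafka client; this is an illustration, not PAP's actual wiring:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirrored from the ConsumerConfig dump in the log above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "bd340acf-32e5-46ed-9341-bc882164db21");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> r : records) {
                System.out.println(r.value()); // PDP_STATUS / PDP_UPDATE messages as JSON
            }
        }
    }
}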
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.417361419Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.043765ms
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | security.protocol = PLAINTEXT
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
grafana | logger=migrator t=2024-02-25T23:14:18.426261892Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | security.providers = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.427944246Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.681604ms
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | send.buffer.bytes = 131072
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-02-25T23:14:18.436398411Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.437444018Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.045057ms
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.443420016Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | ssl.cipher.suites = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.452373078Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.956232ms
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-02-25 23:14:52,439] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator | > upgrade 0570-toscadatatype.sql
grafana | logger=migrator t=2024-02-25T23:14:18.459842289Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
kafka | [2024-02-25 23:14:52,439] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-pap | sasl.mechanism = GSSAPI
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.460835043Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=993.284µs
kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | ssl.engine.factory.class = null
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
grafana | logger=migrator t=2024-02-25T23:14:18.465725916Z level=info msg="Executing migration" id="create cache_data table"
kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | ssl.key.password = null
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.46667199Z level=info msg="Migration successfully executed" id="create cache_data table" duration=944.884µs
kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.471896127Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.47282895Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=933.813µs
policy-apex-pdp | ssl.keystore.certificate.chain = null
kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
grafana | logger=migrator t=2024-02-25T23:14:18.478423164Z level=info msg="Executing migration" id="create short_url table v1"
policy-apex-pdp | ssl.keystore.key = null
kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | --------------
policy-apex-pdp | ssl.keystore.location = null
grafana | logger=migrator t=2024-02-25T23:14:18.479106733Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=683.179µs
kafka | [2024-02-25 23:14:52,634] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-apex-pdp | ssl.keystore.password = null
grafana | logger=migrator t=2024-02-25T23:14:18.485968785Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | --------------
policy-apex-pdp | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-02-25T23:14:18.486757067Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=788.12µs
kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator |
policy-apex-pdp | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-02-25T23:14:18.493245012Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator |
policy-apex-pdp | ssl.provider = null
grafana | logger=migrator t=2024-02-25T23:14:18.493381104Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=136.2µs
kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-apex-pdp | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-02-25T23:14:18.499657408Z level=info msg="Executing migration" id="delete alert_definition table"
kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | security.providers = null
policy-db-migrator | --------------
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-02-25T23:14:18.49976444Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=108.2µs
kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | send.buffer.bytes = 131072
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-apex-pdp | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-02-25T23:14:18.503048778Z level=info msg="Executing migration" id="recreate alert_definition table"
kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | session.timeout.ms = 45000
policy-db-migrator | --------------
policy-apex-pdp | ssl.truststore.location = null
grafana | logger=migrator t=2024-02-25T23:14:18.503995862Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=946.564µs
kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
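The controller entries above walk every __consumer_offsets partition (and policy-pdp-pap-0) through Kafka's partition state machine, NewPartition -> OnlinePartition, electing broker 1 as leader with leaderEpoch=0 and a single-replica ISR. A small sketch, assuming the standard Kafka AdminClient, that inspects the resulting leadership from outside; it is an illustration, not part of the CSIT suite:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class PartitionStateSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("__consumer_offsets"))
                    .allTopicNames().get().get("__consumer_offsets");
            // Once a partition reaches OnlinePartition it reports a live leader (broker 1 here).
            desc.partitions().forEach(p ->
                    System.out.printf("partition %d leader=%s isr=%s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}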
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator |
policy-apex-pdp | ssl.truststore.password = null
grafana | logger=migrator t=2024-02-25T23:14:18.511063777Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
kafka | [2024-02-25 23:14:52,635] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-db-migrator |
policy-apex-pdp | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-02-25T23:14:18.512391256Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.327209ms
kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | ssl.cipher.suites = null
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-apex-pdp | transaction.timeout.ms = 60000
grafana | logger=migrator t=2024-02-25T23:14:18.521727005Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator | --------------
policy-apex-pdp | transactional.id = null
grafana | logger=migrator t=2024-02-25T23:14:18.523056654Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.329219ms
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
grafana | logger=migrator t=2024-02-25T23:14:18.528288072Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
policy-pap | ssl.engine.factory.class = null
kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-apex-pdp |
grafana | logger=migrator t=2024-02-25T23:14:18.528381013Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=89.731µs
policy-pap | ssl.key.password = null
kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-apex-pdp | [2024-02-25T23:14:53.693+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
grafana | logger=migrator t=2024-02-25T23:14:18.531441008Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
grafana | logger=migrator t=2024-02-25T23:14:18.532358622Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=917.514µs
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-02-25T23:14:18.536410592Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
policy-pap | ssl.keystore.key = null
kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902893710
grafana | logger=migrator t=2024-02-25T23:14:18.537211894Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=801.502µs
policy-pap | ssl.keystore.location = null
kafka | [2024-02-25 23:14:52,636] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=39c8ecad-0633-4ba4-9ca4-00222bde67e2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
grafana | logger=migrator t=2024-02-25T23:14:18.540868088Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
policy-pap | ssl.keystore.password = null
kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|ServiceManager|main] service manager starting set alive
grafana | logger=migrator t=2024-02-25T23:14:18.542138177Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.269549ms
policy-pap | ssl.keystore.type = JKS
kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-apex-pdp | [2024-02-25T23:14:53.710+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
grafana | logger=migrator t=2024-02-25T23:14:18.601964742Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-apex-pdp | [2024-02-25T23:14:53.713+00:00|INFO|ServiceManager|main] service manager starting topic sinks
grafana | logger=migrator t=2024-02-25T23:14:18.60451318Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=2.549438ms
policy-pap | ssl.provider = null
kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-apex-pdp | [2024-02-25T23:14:53.713+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
grafana | logger=migrator t=2024-02-25T23:14:18.609662476Z level=info msg="Executing migration" id="Add column paused in alert_definition"
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
grafana | logger=migrator t=2024-02-25T23:14:18.613871468Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=4.206142ms
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
grafana | logger=migrator t=2024-02-25T23:14:18.617665655Z level=info msg="Executing migration" id="drop alert_definition table"
policy-pap | ssl.truststore.certificates = null
kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
grafana | logger=migrator t=2024-02-25T23:14:18.618405896Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=739.651µs
policy-pap | ssl.truststore.location = null
kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.622870392Z level=info msg="Executing migration" id="delete alert_definition_version table"
policy-pap | ssl.truststore.password = null
kafka | [2024-02-25 23:14:52,637] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.622968093Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=97.181µs
policy-pap | ssl.truststore.type = JKS
kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b53cde7a-481f-427a-882b-d5bcee52ac2a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
grafana | logger=migrator t=2024-02-25T23:14:18.625512521Z level=info msg="Executing migration" id="recreate alert_definition_version table"
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b53cde7a-481f-427a-882b-d5bcee52ac2a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.626180621Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=667.72µs
policy-pap |
kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-02-25T23:14:53.715+00:00|INFO|ServiceManager|main] service manager starting Create REST server
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.630086929Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
policy-pap | [2024-02-25T23:14:49.911+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-02-25T23:14:53.746+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:18.631368718Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.281019ms
policy-pap | [2024-02-25T23:14:49.912+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | []
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
grafana | logger=migrator t=2024-02-25T23:14:18.635965015Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
policy-pap | [2024-02-25T23:14:49.912+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902889910
kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | [2024-02-25T23:14:53.749+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.637051082Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.085187ms
policy-pap | [2024-02-25T23:14:49.916+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-1, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Subscribed to topic(s): policy-pdp-pap
kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"0d0dd601-c190-45e7-b3e9-fc8e0be684d1","timestampMs":1708902893717,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
grafana | logger=migrator t=2024-02-25T23:14:18.640947919Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
policy-pap | [2024-02-25T23:14:49.917+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:18.64101514Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=67.521µs
policy-pap | allow.auto.create.topics = true
policy-apex-pdp | [2024-02-25T23:14:53.921+00:00|INFO|ServiceManager|main] service manager starting Rest Server
policy-db-migrator |
kafka | [2024-02-25 23:14:52,638] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.645180662Z level=info msg="Executing migration" id="drop alert_definition_version table"
policy-pap | auto.commit.interval.ms = 5000
policy-apex-pdp | [2024-02-25T23:14:53.922+00:00|INFO|ServiceManager|main] service manager starting
policy-db-migrator |
kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.646202097Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.023855ms
policy-pap | auto.include.jmx.reporter = true
policy-apex-pdp | [2024-02-25T23:14:53.922+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.651416625Z level=info msg="Executing migration" id="create alert_instance table"
policy-pap | auto.offset.reset = latest
policy-apex-pdp | [2024-02-25T23:14:53.922+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.652342628Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=925.144µs
policy-pap | bootstrap.servers = [kafka:9092]
policy-apex-pdp | [2024-02-25T23:14:53.935+00:00|INFO|ServiceManager|main] service manager started
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.656132745Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
policy-pap | check.crcs = true
policy-apex-pdp | [2024-02-25T23:14:53.935+00:00|INFO|ServiceManager|main] service manager started
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.657417283Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.323269ms
policy-pap | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | [2024-02-25T23:14:53.936+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
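For reference, a minimal sketch of how the PDP_STATUS heartbeat payload logged above could be deserialized. It assumes Gson (which the apex-pdp log below reports using for REST calls); only the JSON keys come from the log, while the PdpHeartbeat class and field subset are illustrative, not the real policy-models types.

import com.google.gson.Gson;

public class PdpHeartbeatDemo {
    // Fields mirror the keys of the heartbeat JSON printed by policy-apex-pdp.
    static class PdpHeartbeat {
        String pdpType; String state; String healthy; String description;
        String messageName; String requestId; long timestampMs;
        String name; String pdpGroup;
    }

    public static void main(String[] args) {
        String json = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
            + "\"description\":\"Pdp Heartbeat\",\"messageName\":\"PDP_STATUS\","
            + "\"requestId\":\"0d0dd601-c190-45e7-b3e9-fc8e0be684d1\","
            + "\"timestampMs\":1708902893717,"
            + "\"name\":\"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c\","
            + "\"pdpGroup\":\"defaultGroup\"}";
        // Gson maps the JSON keys onto the matching field names.
        PdpHeartbeat hb = new Gson().fromJson(json, PdpHeartbeat.class);
        System.out.println(hb.messageName + " from " + hb.name + " in state " + hb.state);
    }
}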
policy-db-migrator |
kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.662100723Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
policy-pap | client.id = consumer-policy-pap-2
policy-db-migrator |
kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-02-25 23:14:52,639] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.662876794Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=775.961µs
policy-pap | client.rack =
policy-db-migrator | > upgrade 0660-toscaparameter.sql
kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger)
policy-apex-pdp | [2024-02-25T23:14:53.935+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-02-25T23:14:18.666364745Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
policy-pap | connections.max.idle.ms = 540000
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger)
policy-apex-pdp | [2024-02-25T23:14:54.081+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ
grafana | logger=migrator t=2024-02-25T23:14:18.670479297Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.112062ms
policy-pap | default.api.timeout.ms = 60000
kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
policy-apex-pdp | [2024-02-25T23:14:54.081+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | enable.auto.commit = true
kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.673911528Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
policy-apex-pdp | [2024-02-25T23:14:54.083+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator | --------------
policy-pap | exclude.internal.topics = true
kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.674678309Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=766.641µs
policy-apex-pdp | [2024-02-25T23:14:54.090+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] (Re-)joining group
policy-db-migrator |
policy-pap | fetch.max.bytes = 52428800
kafka | [2024-02-25 23:14:52,642] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
policy-apex-pdp | [2024-02-25T23:14:54.088+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-db-migrator |
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-02-25T23:14:18.679096814Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-02-25T23:14:18.679824745Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=727.971µs
policy-apex-pdp | [2024-02-25T23:14:54.109+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Request joining group due to: need to re-join with the given member-id: consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7
kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
policy-pap | group.id = policy-pap
grafana | logger=migrator t=2024-02-25T23:14:18.684800429Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
policy-apex-pdp | [2024-02-25T23:14:54.109+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
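The MemberIdRequiredException above is part of the normal two-step group join: the first JoinGroup request carries no member id, the broker rejects it while assigning one, and the client silently re-joins with that id. A self-contained sketch of a consumer that would produce this exchange; the broker address, group id, topic, and fetch timeout are taken from the log, everything else is illustrative.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapListenerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "b53cde7a-481f-427a-882b-d5bcee52ac2a");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            while (true) {
                // The first poll() triggers the join; the member-id retry is handled
                // internally by the client, which is why the log shows two join attempts.
                // fetchTimeout=15000 matches the SingleThreadedKafkaTopicSource above.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("[IN|KAFKA|policy-pdp-pap] " + record.value());
                }
            }
        }
    }
}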
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.717583505Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=32.775875ms
policy-apex-pdp | [2024-02-25T23:14:54.109+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] (Re-)joining group
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
policy-pap | group.instance.id = null
kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
policy-apex-pdp | [2024-02-25T23:14:54.615+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-db-migrator | --------------
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-02-25T23:14:18.72276876Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
policy-db-migrator |
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-02-25T23:14:18.755085519Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=32.311039ms
policy-apex-pdp | [2024-02-25T23:14:54.617+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
policy-db-migrator |
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-02-25T23:14:18.766716131Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
policy-apex-pdp | [2024-02-25T23:14:56.179+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.5 - policyadmin [25/Feb/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10652 "-" "Prometheus/2.50.0"
kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-02-25T23:14:18.768003191Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.29204ms
policy-apex-pdp | [2024-02-25T23:14:57.117+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7', protocol='range'}
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-02-25T23:14:18.779122414Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
policy-apex-pdp | [2024-02-25T23:14:57.127+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Finished assignment for group at generation 1: {consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-25T23:14:18.781333087Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=2.210213ms
policy-apex-pdp | [2024-02-25T23:14:57.153+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7', protocol='range'}
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,643] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-02-25T23:14:18.790859579Z level=info msg="Executing migration" id="add current_reason column related to current_state"
policy-apex-pdp | [2024-02-25T23:14:57.154+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator |
kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-02-25T23:14:18.800193847Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=9.338247ms
policy-apex-pdp | [2024-02-25T23:14:57.156+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator |
kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-02-25T23:14:18.810314287Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
policy-apex-pdp | [2024-02-25T23:14:57.168+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | > upgrade 0690-toscapolicy.sql
kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-02-25T23:14:18.816198524Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.882847ms
executed" id="add result_fingerprint column to alert_instance" duration=5.882847ms policy-apex-pdp | [2024-02-25T23:14:57.179+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2, groupId=b53cde7a-481f-427a-882b-d5bcee52ac2a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-db-migrator | -------------- kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-02-25T23:14:18.832140319Z level=info msg="Executing migration" id="create alert_rule table" policy-apex-pdp | [2024-02-25T23:15:13.716+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-02-25T23:14:18.833619972Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.478653ms policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2f1dcd45-4683-45cf-9d92-dddeb169e9b3","timestampMs":1708902913716,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"} policy-db-migrator | -------------- kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-02-25T23:14:18.842073937Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" policy-apex-pdp | [2024-02-25T23:15:13.745+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-02-25T23:14:18.843317915Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.242417ms policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2f1dcd45-4683-45cf-9d92-dddeb169e9b3","timestampMs":1708902913716,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"} policy-db-migrator | kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] grafana | logger=migrator t=2024-02-25T23:14:18.847383455Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" policy-apex-pdp | [2024-02-25T23:15:13.748+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | > upgrade 0700-toscapolicytype.sql kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) policy-pap | receive.buffer.bytes = 65536 grafana | logger=migrator t=2024-02-25T23:14:18.848946138Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.561763ms policy-apex-pdp | [2024-02-25T23:15:13.887+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-02-25T23:14:18.853342034Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" policy-apex-pdp | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"8017ad77-05f8-444a-aa06-a451f278f050","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT 
NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) kafka | [2024-02-25 23:14:52,644] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-02-25T23:14:18.854769524Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.42921ms policy-apex-pdp | [2024-02-25T23:15:13.898+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-db-migrator | -------------- kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-02-25T23:14:18.860908896Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" policy-apex-pdp | [2024-02-25T23:15:13.898+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-25T23:14:18.861011107Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=101.781µs policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4057075b-fac2-492c-a7f4-7d5372a2ee8d","timestampMs":1708902913898,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"} policy-db-migrator | kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-02-25T23:14:18.865411162Z level=info msg="Executing migration" id="add column for to alert_rule" policy-apex-pdp | [2024-02-25T23:15:13.900+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | > upgrade 0710-toscapolicytypes.sql kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-02-25T23:14:18.872845392Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=7.43358ms policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8017ad77-05f8-444a-aa06-a451f278f050","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d02f35-10cd-4f51-b9ca-9c8af9b90048","timestampMs":1708902913900,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-02-25T23:14:18.876413305Z level=info msg="Executing migration" id="add column annotations to alert_rule" policy-apex-pdp | [2024-02-25T23:15:13.915+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:18.883053844Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.640769ms policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4057075b-fac2-492c-a7f4-7d5372a2ee8d","timestampMs":1708902913898,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"} policy-db-migrator | -------------- policy-pap | sasl.kerberos.service.name = null kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:18.887851114Z level=info msg="Executing migration" id="add column labels to alert_rule" policy-apex-pdp | [2024-02-25T23:15:13.915+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type 
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.895177223Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=7.324849ms
policy-apex-pdp | [2024-02-25T23:15:13.922+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.900104946Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8017ad77-05f8-444a-aa06-a451f278f050","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d02f35-10cd-4f51-b9ca-9c8af9b90048","timestampMs":1708902913900,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.901035739Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=930.803µs
policy-apex-pdp | [2024-02-25T23:15:13.922+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | --------------
policy-pap | sasl.login.class = null
kafka | [2024-02-25 23:14:52,645] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.904579052Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
policy-apex-pdp | [2024-02-25T23:15:13.938+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.905649278Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.069196ms
policy-apex-pdp | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.910098443Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
policy-apex-pdp | [2024-02-25T23:15:13.941+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator |
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.916211744Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.110061ms
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"00c40a54-df00-48d3-a9d7-3e82bceb0900","timestampMs":1708902913940,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.920571349Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
policy-apex-pdp | [2024-02-25T23:15:13.952+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0730-toscaproperty.sql
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.926628648Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.056419ms
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"00c40a54-df00-48d3-a9d7-3e82bceb0900","timestampMs":1708902913940,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.93016112Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
policy-apex-pdp | [2024-02-25T23:15:13.952+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.931152095Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=990.225µs
policy-apex-pdp | [2024-02-25T23:15:14.000+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:18.935600681Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
policy-apex-pdp | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a77fa683-80f4-4771-a123-a237db6bdd66","timestampMs":1708902913954,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
policy-pap | sasl.mechanism = GSSAPI
kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:18.941484378Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.882857ms policy-apex-pdp | [2024-02-25T23:15:14.002+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.072205228Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a77fa683-80f4-4771-a123-a237db6bdd66","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"55953c4d-82bc-4c85-8ce1-d8e5f2afa2ca","timestampMs":1708902914002,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-02-25 23:14:52,646] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.077407326Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.206698ms policy-apex-pdp | [2024-02-25T23:15:14.009+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-02-25 23:14:52,647] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.080903791Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a77fa683-80f4-4771-a123-a237db6bdd66","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"55953c4d-82bc-4c85-8ce1-d8e5f2afa2ca","timestampMs":1708902914002,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-02-25 23:14:52,647] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.080956512Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=53.361µs policy-apex-pdp | [2024-02-25T23:15:14.011+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-02-25 23:14:52,648] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) policy-apex-pdp | [2024-02-25T23:15:56.091+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.5 - policyadmin [25/Feb/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10654 "-" "Prometheus/2.50.0" grafana | logger=migrator t=2024-02-25T23:14:19.088701619Z level=info msg="Executing migration" id="create alert_rule_version table" policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-02-25 23:14:52,651] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.089490463Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=789.415µs policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.097437933Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.098229978Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=792.035µs policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica 
(state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.103048609Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.103849563Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=803.724µs policy-db-migrator | -------------- policy-pap | security.protocol = PLAINTEXT kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.109801616Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" policy-db-migrator | policy-pap | security.providers = null kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.109855837Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=54.391µs policy-db-migrator | policy-pap | send.buffer.bytes = 131072 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.114609236Z level=info msg="Executing migration" id="add column for to alert_rule_version" policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql policy-pap | session.timeout.ms = 45000 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.121397585Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.786969ms policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.138805152Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | 
kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.146235253Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.436371ms
policy-db-migrator | --------------
policy-pap | ssl.cipher.suites = null
kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.152900918Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
policy-db-migrator |
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-02-25 23:14:52,653] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.158694957Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.793599ms
policy-db-migrator |
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.162964248Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-pap | ssl.engine.factory.class = null
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.170116682Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.151654ms
policy-db-migrator | --------------
policy-pap | ssl.key.password = null
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.179668621Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.187854276Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=8.184274ms
policy-db-migrator | --------------
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.194852898Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
policy-db-migrator |
policy-pap | ssl.keystore.key = null
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.194905829Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=53.391µs
policy-db-migrator |
policy-pap | ssl.keystore.location = null
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.19924913Z level=info msg="Executing migration" id=create_alert_configuration_table
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-pap | ssl.keystore.password = null
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.200738029Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.483878ms
policy-db-migrator | --------------
policy-pap | ssl.keystore.type = JKS
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.210919641Z level=info msg="Executing migration" id="Add column default in alert_configuration"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.218643386Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.721845ms
policy-db-migrator | --------------
policy-pap | ssl.provider = null
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.226186479Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
policy-db-migrator |
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.226364253Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=184.364µs
policy-db-migrator |
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-02-25 23:14:52,654] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
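[editor's note] Every policy-db-migrator step in this excerpt has the same shape: an "> upgrade NNNN-name.sql" banner, a dashed divider, then a single idempotent DDL statement (CREATE TABLE IF NOT EXISTS / CREATE INDEX). A sketch of replaying such numbered scripts over JDBC follows; the connection URL, credentials, and driver setup are placeholders, not taken from this log, and the real migrator runs the scripts through the database client directly:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;

// Sketch of the banner/divider/statement pattern visible in the migrator log.
// Statements are taken verbatim from the log; IF NOT EXISTS keeps re-runs safe.
public final class UpgradeScriptRunner {
    public static void main(String[] args) throws Exception {
        List<String> upgrades = List.of(
            "CREATE TABLE IF NOT EXISTS toscarequirements "
                + "(name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, "
                + "PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mariadb://mariadb:3306/policyadmin", // placeholder URL
                 "policy_user", "CHANGE_ME");               // placeholder creds
             Statement stmt = conn.createStatement()) {
            for (String sql : upgrades) {
                System.out.println("> upgrade " + sql.substring(0, 40) + "...");
                System.out.println("--------------");
                stmt.execute(sql); // idempotent, so replay is harmless
            }
        }
    }
}
```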
grafana | logger=migrator t=2024-02-25T23:14:19.231843916Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-pap | ssl.truststore.certificates = null
kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.240309347Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.47843ms
policy-db-migrator | --------------
policy-pap | ssl.truststore.location = null
kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.249185335Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | ssl.truststore.password = null
kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.251253374Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=2.069139ms
policy-db-migrator | --------------
policy-pap | ssl.truststore.type = JKS
kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.258398039Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
policy-db-migrator |
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.265572495Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.179296ms
policy-db-migrator |
policy-pap |
kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.270022299Z level=info msg="Executing migration" id=create_ngalert_configuration_table
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-pap | [2024-02-25T23:14:49.923+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.270554619Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=530.93µs
policy-db-migrator | --------------
policy-pap | [2024-02-25T23:14:49.923+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.278036041Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-pap | [2024-02-25T23:14:49.923+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902889923
kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.2790673Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.030489ms
policy-db-migrator | --------------
policy-pap | [2024-02-25T23:14:49.923+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.289111021Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
policy-db-migrator |
kafka | [2024-02-25 23:14:52,655] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-25T23:14:50.289+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
grafana | logger=migrator t=2024-02-25T23:14:19.297744064Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=8.633423ms
ngalert_configuration" duration=8.633423ms policy-db-migrator | kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-25T23:14:50.445+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning grafana | logger=migrator t=2024-02-25T23:14:19.307104151Z level=info msg="Executing migration" id="create provenance_type table" policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-25T23:14:50.714+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@f287a4e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@3879feec, org.springframework.security.web.context.SecurityContextHolderFilter@ce0bbd5, org.springframework.security.web.header.HeaderWriterFilter@1f7557fe, org.springframework.security.web.authentication.logout.LogoutFilter@7120daa6, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@5e198c40, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7c359808, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@16361e61, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@71d2261e, org.springframework.security.web.access.ExceptionTranslationFilter@4ac0d49, org.springframework.security.web.access.intercept.AuthorizationFilter@280c3dc0] grafana | logger=migrator t=2024-02-25T23:14:19.307994697Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=900.246µs policy-db-migrator | -------------- kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-25T23:14:51.629+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' grafana | logger=migrator t=2024-02-25T23:14:19.318391595Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-25T23:14:51.734+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] grafana | logger=migrator t=2024-02-25T23:14:19.319532616Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" 
duration=1.140491ms policy-db-migrator | -------------- kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-25T23:14:51.759+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' grafana | logger=migrator t=2024-02-25T23:14:19.328919264Z level=info msg="Executing migration" id="create alert_image table" policy-db-migrator | kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-25T23:14:51.778+00:00|INFO|ServiceManager|main] Policy PAP starting grafana | logger=migrator t=2024-02-25T23:14:19.329540526Z level=info msg="Migration successfully executed" id="create alert_image table" duration=621.601µs policy-db-migrator | policy-pap | [2024-02-25T23:14:51.779+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-25T23:14:19.337958985Z level=info msg="Executing migration" id="add unique index on token to alert_image table" policy-pap | [2024-02-25T23:14:51.779+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0820-toscatrigger.sql grafana | logger=migrator t=2024-02-25T23:14:19.339087406Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.132631ms policy-pap | [2024-02-25T23:14:51.780+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-25T23:14:19.344095011Z level=info msg="Executing migration" id="support longer URLs in alert_image table" policy-pap | [2024-02-25T23:14:51.780+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-02-25T23:14:19.344199203Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=105.242µs policy-pap | 
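[editor's note] Once Tomcat reports the PAP API up on port 6969 under '/policy/pap/v1', the CSIT suite exercises it over plain HTTP. A sketch of such a probe follows; the localhost host, the healthcheck path, and the credentials are assumptions for illustration (the log only shows the port, context path, and a policyadmin basic-auth user elsewhere), not values confirmed by this run:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Illustrative probe of the PAP endpoint the log reports as started.
public final class PapHealthProbe {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder()
            .encodeToString("policyadmin:CHANGE_ME".getBytes()); // placeholder password
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:6969/policy/pap/v1/healthcheck")) // assumed path
            .header("Authorization", "Basic " + auth)
            .GET()
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```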
kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-25T23:14:51.781+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
kafka | [2024-02-25 23:14:52,656] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.3477615Z level=info msg="Executing migration" id=create_alert_configuration_history_table
policy-db-migrator |
policy-pap | [2024-02-25T23:14:51.785+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bd340acf-32e5-46ed-9341-bc882164db21, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2a525f88
kafka | [2024-02-25 23:14:52,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.348512075Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=750.145µs
policy-db-migrator |
policy-pap | [2024-02-25T23:14:51.797+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bd340acf-32e5-46ed-9341-bc882164db21, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-02-25 23:14:52,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.353106761Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-pap | [2024-02-25T23:14:51.797+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | [2024-02-25 23:14:52,657] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.354101681Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=994.63µs
policy-db-migrator | --------------
policy-pap | allow.auto.create.topics = true
kafka | [2024-02-25 23:14:52,657] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.358445893Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
policy-pap | auto.commit.interval.ms = 5000
kafka | [2024-02-25 23:14:52,663] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.359181057Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
policy-db-migrator | --------------
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-02-25 23:14:52,664] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.364048538Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
policy-db-migrator |
policy-pap | auto.offset.reset = latest
kafka | [2024-02-25 23:14:52,664] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.364853284Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=804.746µs
policy-db-migrator |
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-02-25 23:14:52,664] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.370278847Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-pap | check.crcs = true
kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.37205859Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.784373ms
policy-db-migrator | --------------
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
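[editor's note] The "51 partitions" in the controller and broker lines here are the 50 partitions of the __consumer_offsets topic (the broker default; the TRACE lines enumerate indexes 0 through 49) plus the single-partition policy-pdp-pap topic. A sketch of confirming that count with the Kafka AdminClient against the same kafka:9092 bootstrap the clients in this log use:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

// Sketch: describe the two topics involved and print their partition counts,
// which should sum to the 51 partitions the state-change log reports.
public final class PartitionCountCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            Map<String, TopicDescription> topics = admin
                .describeTopics(List.of("__consumer_offsets", "policy-pdp-pap"))
                .allTopicNames()
                .get();
            topics.forEach((name, description) ->
                System.out.println(name + ": "
                    + description.partitions().size() + " partitions"));
        }
    }
}
```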
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
policy-pap | client.id = consumer-bd340acf-32e5-46ed-9341-bc882164db21-3
grafana | logger=migrator t=2024-02-25T23:14:19.377075326Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | client.rack =
grafana | logger=migrator t=2024-02-25T23:14:19.385325671Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.250166ms
kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-02-25T23:14:19.389032541Z level=info msg="Executing migration" id="create library_element table v1"
kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-02-25T23:14:19.389873508Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=838.697µs
kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-02-25T23:14:19.397114035Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-02-25T23:14:19.398060002Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=947.977µs
kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-02-25T23:14:19.401333025Z level=info msg="Executing migration" id="create library_element_connection table v1"
kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-02-25T23:14:19.402085579Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=752.314µs
kafka | [2024-02-25 23:14:52,665] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-02-25T23:14:19.405700577Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | group.id = bd340acf-32e5-46ed-9341-bc882164db21
grafana | logger=migrator t=2024-02-25T23:14:19.407620384Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.919237ms
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-02-25T23:14:19.414102376Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-02-25T23:14:19.415252919Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.149772ms
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-02-25T23:14:19.419316825Z level=info msg="Executing migration" id="increase max description length to 2048"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-02-25T23:14:19.419346535Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=32.95µs
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-02-25T23:14:19.433405392Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-02-25T23:14:19.433500493Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=97.281µs
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-25T23:14:19.488189109Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-02-25T23:14:19.488564136Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=378.177µs
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
grafana | logger=migrator t=2024-02-25T23:14:19.495009048Z level=info msg="Executing migration" id="create data_keys table"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-02-25T23:14:19.495901354Z level=info msg="Migration successfully executed" id="create data_keys table" duration=896.867µs
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-02-25T23:14:19.502206804Z level=info msg="Executing migration" id="create secrets table"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-02-25T23:14:19.502800125Z level=info msg="Migration successfully executed" id="create secrets table" duration=593.351µs
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-02-25T23:14:19.506367933Z level=info msg="Executing migration" id="rename data_keys name column to id"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-02-25T23:14:19.555542644Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=49.164471ms
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-02-25T23:14:19.563958643Z level=info msg="Executing migration" id="add name column into data_keys"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-02-25T23:14:19.571910143Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.95841ms
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-02-25T23:14:19.576158964Z level=info msg="Executing migration" id="copy data_keys id column values into name"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-02-25T23:14:19.576311676Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=153.023µs
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-02-25T23:14:19.579643879Z level=info msg="Executing migration" id="rename data_keys name column to label"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-02-25T23:14:19.622504761Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=42.861332ms
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-02-25T23:14:19.628569766Z level=info msg="Executing migration" id="rename data_keys id column back to name"
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-25T23:14:19.680319455Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=51.748799ms
policy-db-migrator |
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator |
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-02-25T23:14:19.684393143Z level=info msg="Executing migration" id="create kv_store table v1"
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
kafka | [2024-02-25 23:14:52,666] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-02-25T23:14:19.684951533Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=553.16µs
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-02-25T23:14:19.688755835Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-02-25T23:14:19.689555629Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=798.964µs
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-02-25T23:14:19.695819769Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
policy-db-migrator |
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-02-25T23:14:19.696272437Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=457.798µs
policy-db-migrator |
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-02-25T23:14:19.701892064Z level=info msg="Executing migration" id="create permission table"
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-02-25T23:14:19.702725709Z level=info msg="Migration successfully executed" id="create permission table" duration=833.295µs
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
grafana | logger=migrator t=2024-02-25T23:14:19.712148288Z level=info msg="Executing migration" id="add unique index permission.role_id"
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:19.712947333Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=799.115µs
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:19.717235704Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:19.718284864Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.04835ms
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
grafana | logger=migrator t=2024-02-25T23:14:19.723446662Z level=info msg="Executing migration" id="create role table"
kafka | [2024-02-25 23:14:52,667] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:19.724218587Z level=info msg="Migration successfully executed" id="create role table" duration=774.174µs
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
grafana | logger=migrator t=2024-02-25T23:14:19.728769472Z level=info msg="Executing migration" id="add column display_name"
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:19.734502721Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.732429ms
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
policy-pap | sasl.login.retry.backoff.ms = 100
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:19.738597118Z level=info msg="Executing migration" id="add column group_name"
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
policy-pap | sasl.mechanism = GSSAPI
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:19.746285114Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.687316ms
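A note on the wall of state.change.logger TRACE entries: they enumerate every partition of the internal __consumer_offsets topic (50 partitions by default) plus policy-pdp-pap-0, each with broker 1 as sole replica, leader, and ISR, which is expected in this single-broker CSIT environment. Which of those 50 partitions stores a given consumer group's committed offsets is derived from the group id; a minimal sketch of that mapping, mirroring the sign-bit-masked hash Kafka's Utils.abs uses (the group ids below are only examples):

    // Sketch: mapping a consumer group id to one of the 50 partitions of
    // __consumer_offsets (offsets.topic.num.partitions defaults to 50).
    public final class GroupOffsetsPartition {
        static int partitionFor(String groupId, int numOffsetsPartitions) {
            // Mask the sign bit instead of Math.abs so an Integer.MIN_VALUE
            // hash code cannot yield a negative partition index.
            return (groupId.hashCode() & 0x7fffffff) % numOffsetsPartitions;
        }
        public static void main(String[] args) {
            // Illustrative group ids, including the one this job uses.
            System.out.println(partitionFor("policy-pap", 50));
            System.out.println(partitionFor("example-group", 50));
        }
    }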
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
grafana | logger=migrator t=2024-02-25T23:14:19.750792399Z level=info msg="Executing migration" id="add index role.org_id"
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:19.752036543Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.249364ms
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
grafana | logger=migrator t=2024-02-25T23:14:19.755906826Z level=info msg="Executing migration" id="add unique index role_org_id_name"
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:19.757037618Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.130642ms
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:19.760904591Z level=info msg="Executing migration" id="add index role_org_id_uid"
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:19.762045212Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.140451ms
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
grafana | logger=migrator t=2024-02-25T23:14:19.769448132Z level=info msg="Executing migration" id="create team role table"
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:19.77140845Z level=info msg="Migration successfully executed" id="create team role table" duration=1.957568ms
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-02-25 23:14:52,718] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
grafana | logger=migrator t=2024-02-25T23:14:19.779094645Z level=info msg="Executing migration" id="add index team_role.org_id"
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:19.780259667Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.164422ms
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:19.784564278Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
policy-pap | security.providers = null
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:19.785574918Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.01420ms
policy-pap | send.buffer.bytes = 131072
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
grafana | logger=migrator t=2024-02-25T23:14:19.791301466Z level=info msg="Executing migration" id="add index team_role.team_id"
policy-pap | session.timeout.ms = 45000
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:19.792091791Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=790.265µs
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-02-25T23:14:19.795931903Z level=info msg="Executing migration" id="create user role table"
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:19.797366411Z level=info msg="Migration successfully executed" id="create user role table" duration=1.430058ms
policy-pap | ssl.cipher.suites = null
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:20.80207510Z level=info msg="Executing migration" id="add index user_role.org_id"
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:19.803942635Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.867615ms
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
grafana | logger=migrator t=2024-02-25T23:14:19.811785354Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
policy-pap | ssl.engine.factory.class = null
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:19.812720821Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=935.997µs
policy-pap | ssl.key.password = null
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-02-25T23:14:19.819132683Z level=info msg="Executing migration" id="add index user_role.user_id"
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
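The 0960 and 0970 steps above wire composite foreign keys from toscanodetemplate to the (name, version) concept keys in toscacapabilityassignments and toscarequirements, both with ON UPDATE RESTRICT ON DELETE RESTRICT. RESTRICT means a referenced (name, version) row cannot be deleted or re-keyed while a node template still points at it. A hedged JDBC sketch of what that looks like from application code; the connection URL, credentials, and row values are placeholders for illustration, not values taken from this job:

    // Illustration only: once FK_ToscaNodeTemplate_requirementsName exists,
    // deleting a toscarequirements row that a node template still references
    // should fail with an integrity-constraint violation.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLIntegrityConstraintViolationException;
    import java.sql.Statement;

    public final class RestrictDemo {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection(
                     "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_user");
                 Statement s = c.createStatement()) {
                s.executeUpdate(
                    "DELETE FROM toscarequirements WHERE name='demo.req' AND version='1.0.0'");
            } catch (SQLIntegrityConstraintViolationException e) {
                // ON DELETE RESTRICT: the parent row is still referenced.
                System.err.println("blocked by FK: " + e.getMessage());
            }
        }
    }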
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:19.819955108Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=823.305µs
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:19.827309208Z level=info msg="Executing migration" id="create builtin role table"
policy-pap | ssl.keystore.key = null
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:19.82850371Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.195192ms
policy-pap | ssl.keystore.location = null
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:19.835497193Z level=info msg="Executing migration" id="add index builtin_role.role_id"
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-02-25T23:14:19.83854148Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=3.048657ms
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-02-25T23:14:19.843994924Z level=info msg="Executing migration" id="add index builtin_role.name"
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-02-25T23:14:19.845058453Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.063309ms
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
policy-pap | ssl.provider = null
grafana | logger=migrator t=2024-02-25T23:14:19.851819371Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
policy-db-migrator |
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
policy-pap | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-02-25T23:14:19.86332036Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=11.501598ms
policy-db-migrator |
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
policy-pap | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-02-25T23:14:19.95950866Z level=info msg="Executing migration" id="add index builtin_role.org_id"
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
policy-pap | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-02-25T23:14:19.961432227Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.928127ms
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
policy-pap | ssl.truststore.location = null
grafana | logger=migrator t=2024-02-25T23:14:19.967231986Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
policy-pap | ssl.truststore.password = null
grafana | logger=migrator t=2024-02-25T23:14:19.968303896Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.07410ms
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
policy-pap | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-02-25T23:14:19.973569926Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
policy-db-migrator |
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-25T23:14:19.975150965Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.580449ms
policy-db-migrator |
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
policy-pap |
grafana | logger=migrator t=2024-02-25T23:14:19.98169057Z level=info msg="Executing migration" id="add unique index role.uid"
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
policy-pap | [2024-02-25T23:14:51.805+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
grafana | logger=migrator t=2024-02-25T23:14:19.983441362Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.750422ms
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
policy-pap | [2024-02-25T23:14:51.805+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-02-25T23:14:19.989229102Z level=info msg="Executing migration" id="create seed assignment table"
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
policy-pap | [2024-02-25T23:14:51.805+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902891805
grafana | logger=migrator t=2024-02-25T23:14:19.990435785Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.206013ms
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
policy-pap | [2024-02-25T23:14:51.805+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Subscribed to topic(s): policy-pdp-pap
grafana | logger=migrator t=2024-02-25T23:14:19.995192676Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
policy-db-migrator |
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
policy-pap | [2024-02-25T23:14:51.806+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
grafana | logger=migrator t=2024-02-25T23:14:19.997245194Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=2.046198ms
policy-db-migrator |
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
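At this point policy-pap has built its heartbeat consumer on Kafka client 3.6.1 and subscribed it to policy-pdp-pap. A minimal sketch of an equivalent consumer, using only values visible in this log (bootstrap server kafka:9092, group.id policy-pap, StringDeserializer for keys and values, auto.offset.reset=latest); the single poll call is illustrative rather than the PAP's actual dispatch loop:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public final class PdpPapListener {
        public static void main(String[] args) {
            Properties p = new Properties();
            p.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            p.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            p.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            p.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            p.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(p)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // One poll for illustration; a real dispatcher would loop.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
                }
            }
        }
    }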
policy-pap | [2024-02-25T23:14:51.806+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f430ca1f-0b14-4277-b999-dfdb1b16d100, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3f2ab6ec
grafana | logger=migrator t=2024-02-25T23:14:20.004580493Z level=info msg="Executing migration" id="add column hidden to role table"
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
policy-pap | [2024-02-25T23:14:51.806+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f430ca1f-0b14-4277-b999-dfdb1b16d100, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
grafana | logger=migrator t=2024-02-25T23:14:20.012461133Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.8807ms
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
policy-pap | [2024-02-25T23:14:51.807+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
grafana | logger=migrator t=2024-02-25T23:14:20.017422337Z level=info msg="Executing migration" id="permission kind migration"
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-02-25 23:14:52,719] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
policy-pap | allow.auto.create.topics = true
grafana | logger=migrator t=2024-02-25T23:14:20.028238442Z level=info msg="Migration successfully executed" id="permission kind migration" duration=10.818585ms
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,721] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
policy-pap | auto.commit.interval.ms = 5000
grafana | logger=migrator t=2024-02-25T23:14:20.035330107Z level=info msg="Executing migration" id="permission attribute migration"
policy-db-migrator |
kafka | [2024-02-25 23:14:52,721] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
policy-pap | auto.include.jmx.reporter = true
grafana | logger=migrator t=2024-02-25T23:14:20.043786018Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.455081ms
policy-db-migrator |
kafka | [2024-02-25 23:14:52,780] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | auto.offset.reset = latest
grafana | logger=migrator t=2024-02-25T23:14:20.049036058Z level=info msg="Executing migration" id="permission identifier migration"
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
kafka | [2024-02-25 23:14:52,796] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | bootstrap.servers = [kafka:9092]
grafana | logger=migrator t=2024-02-25T23:14:20.057527199Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.489501ms
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,799] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
policy-pap | check.crcs = true
grafana | logger=migrator t=2024-02-25T23:14:20.06334981Z level=info msg="Executing migration" id="add permission identifier index"
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-02-25 23:14:52,800] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-02-25T23:14:20.064314518Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=964.078µs
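The Created log entries show the properties __consumer_offsets partitions are created with: cleanup.policy=compact, so only the latest committed offset per (group, topic, partition) key is retained; compression.type=producer; and 100 MiB (104857600-byte) segments. The same properties can be applied to any topic; a hedged AdminClient sketch, where the topic name and replication factor are illustrative, not taken from this job:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.TopicConfig;

    public final class CompactedTopic {
        public static void main(String[] args) throws Exception {
            Properties p = new Properties();
            p.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(p)) {
                // 50 partitions, replication factor 1, mirroring the
                // single-broker layout this log reports.
                NewTopic topic = new NewTopic("demo-compacted", 50, (short) 1)
                    .configs(Map.of(
                        TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT,
                        TopicConfig.COMPRESSION_TYPE_CONFIG, "producer",
                        TopicConfig.SEGMENT_BYTES_CONFIG, "104857600"));
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }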
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,802] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | client.id = consumer-policy-pap-4
grafana | logger=migrator t=2024-02-25T23:14:20.068420787Z level=info msg="Executing migration" id="create query_history table v1"
policy-db-migrator |
kafka | [2024-02-25 23:14:52,817] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | client.rack =
grafana | logger=migrator t=2024-02-25T23:14:20.06966880Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.248824ms
policy-db-migrator |
kafka | [2024-02-25 23:14:52,818] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-02-25T23:14:20.076212015Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
kafka | [2024-02-25 23:14:52,818] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-02-25T23:14:20.077312405Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.1004ms
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,818] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-02-25T23:14:20.084520491Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-02-25 23:14:52,818] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-02-25T23:14:20.084593343Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=73.162µs
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,831] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-02-25T23:14:20.088556339Z level=info msg="Executing migration" id="rbac disabled migrator"
policy-db-migrator |
kafka | [2024-02-25 23:14:52,832] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-02-25T23:14:20.08859481Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=39.191µs
policy-db-migrator |
kafka | [2024-02-25 23:14:52,833] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-02-25T23:14:20.092207448Z level=info msg="Executing migration" id="teams permissions migration"
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
kafka | [2024-02-25 23:14:52,833] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | group.id = policy-pap
grafana | logger=migrator t=2024-02-25T23:14:20.092679917Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=472.679µs
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,833] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-02-25T23:14:20.09965029Z level=info msg="Executing migration" id="dashboard permissions"
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-02-25 23:14:52,843] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-02-25T23:14:20.100229411Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=579.161µs
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,844] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-02-25T23:14:20.105507271Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
policy-db-migrator |
kafka | [2024-02-25 23:14:52,845] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-02-25T23:14:20.106140643Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=633.392µs
policy-db-migrator |
kafka | [2024-02-25 23:14:52,845] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-02-25T23:14:20.115140584Z level=info msg="Executing migration" id="drop managed folder create actions"
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
kafka | [2024-02-25 23:14:52,845] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-02-25T23:14:20.115338307Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=197.943µs
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,856] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-25T23:14:20.121504095Z level=info msg="Executing migration" id="alerting notification permissions"
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-02-25 23:14:52,857] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-02-25T23:14:20.121821281Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=317.586µs
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,858] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-02-25T23:14:20.126661473Z level=info msg="Executing migration" id="create query_history_star table v1"
policy-db-migrator |
kafka | [2024-02-25 23:14:52,858] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-02-25T23:14:20.127425968Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=763.654µs
policy-db-migrator |
kafka | [2024-02-25 23:14:52,858] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-02-25T23:14:20.132427943Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
kafka | [2024-02-25 23:14:52,868] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-02-25T23:14:20.133797988Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.369085ms
kafka | [2024-02-25 23:14:52,869] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-02-25T23:14:20.139433266Z level=info msg="Executing migration" id="add column org_id in query_history_star"
kafka | [2024-02-25 23:14:52,869] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-02-25T23:14:20.14863034Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.196065ms
kafka | [2024-02-25 23:14:52,870] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-02-25T23:14:20.153712277Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
kafka | [2024-02-25 23:14:52,872] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-02-25T23:14:20.153830409Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=118.912µs
kafka | [2024-02-25 23:14:52,883] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator |
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-02-25T23:14:20.157383426Z level=info msg="Executing migration" id="create correlation table v1"
kafka | [2024-02-25 23:14:52,884] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | > upgrade 0100-pdp.sql
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-02-25T23:14:20.158351975Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=968.289µs
kafka | [2024-02-25 23:14:52,884] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-02-25T23:14:20.162135877Z level=info msg="Executing migration" id="add index correlations.uid"
kafka | [2024-02-25 23:14:52,884] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-02-25T23:14:20.16334173Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.203313ms
kafka | [2024-02-25 23:14:52,884] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-25T23:14:20.173092505Z level=info msg="Executing migration" id="add index correlations.source_uid"
kafka | [2024-02-25 23:14:52,894] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator |
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-02-25T23:14:20.176193134Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=3.106549ms
kafka | [2024-02-25 23:14:52,894] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator |
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-02-25T23:14:20.186167114Z level=info msg="Executing migration" id="add correlation config column"
kafka | [2024-02-25 23:14:52,895] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-02-25T23:14:20.201912722Z level=info msg="Migration successfully executed" id="add correlation config column" duration=15.755178ms
kafka | [2024-02-25 23:14:52,895] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-02-25T23:14:20.21544371Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
kafka | [2024-02-25 23:14:52,895] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
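The 0110-idx_tsidx1.sql step adds IDX_TSIDX1 with timeStamp as the leading column, so time-window scans over pdpstatistics can be served from the index rather than a full table scan. A hedged JDBC sketch of the query shape such an index supports; the connection URL, credentials, and time window are placeholders for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Timestamp;
    import java.time.Instant;

    public final class PdpStatisticsWindow {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection(
                     "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_user");
                 // Range predicate on the leading index column, ordered the
                 // same way the index is declared.
                 PreparedStatement ps = c.prepareStatement(
                     "SELECT name, version, timeStamp FROM pdpstatistics "
                     + "WHERE timeStamp >= ? ORDER BY timeStamp, name, version")) {
                ps.setTimestamp(1, Timestamp.from(Instant.now().minusSeconds(3600)));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name") + " " + rs.getTimestamp("timeStamp"));
                    }
                }
            }
        }
    }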
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-02-25T23:14:20.217303225Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.862975ms
kafka | [2024-02-25 23:14:52,904] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-02-25T23:14:20.224651284Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
kafka | [2024-02-25 23:14:52,906] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-02-25T23:14:20.225509162Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=857.707µs
kafka | [2024-02-25 23:14:52,906] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-02-25T23:14:20.233073305Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
kafka | [2024-02-25 23:14:52,906] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-02-25T23:14:20.295867728Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=62.794093ms
kafka | [2024-02-25 23:14:52,906] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-02-25T23:14:20.30439624Z level=info msg="Executing migration" id="create correlation v2"
kafka | [2024-02-25 23:14:52,918] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-02-25T23:14:20.306191295Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.799155ms
kafka | [2024-02-25 23:14:52,920] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-02-25T23:14:20.407879877Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
kafka | [2024-02-25 23:14:52,920] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-02-25T23:14:20.409895215Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.015168ms
kafka | [2024-02-25 23:14:52,920] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | [2024-02-25 23:14:52,920] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
grafana | logger=migrator t=2024-02-25T23:14:20.414421081Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-02-25 23:14:52,931] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:20.415643975Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.224274ms
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-02-25 23:14:52,932] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
grafana | logger=migrator t=2024-02-25T23:14:20.422310831Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-02-25 23:14:52,932] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:20.424250798Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.939407ms
policy-pap | sasl.mechanism = GSSAPI
kafka | [2024-02-25 23:14:52,932] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.432474414Z level=info msg="Executing migration" id="copy correlation v1 to v2"
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator |
kafka | [2024-02-25 23:14:52,932] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:20.432726379Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=252.145µs
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator |
kafka | [2024-02-25 23:14:52,940] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-25T23:14:20.436820366Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
kafka | [2024-02-25 23:14:52,941] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-25T23:14:20.440559498Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=3.737942ms
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | --------------
kafka | [2024-02-25 23:14:52,941] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.450327743Z level=info msg="Executing migration" id="add provisioning column"
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
kafka | [2024-02-25 23:14:52,941] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.459016878Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.685805ms
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:20.464840779Z level=info msg="Executing migration" id="create entity_events table"
kafka | [2024-02-25 23:14:52,942] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | grafana | logger=migrator t=2024-02-25T23:14:20.465534302Z level=info msg="Migration successfully executed" id="create entity_events table" duration=696.953µs kafka | [2024-02-25 23:14:52,949] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-25T23:14:20.470526127Z level=info msg="Executing migration" id="create dashboard public config v1" kafka | [2024-02-25 23:14:52,950] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) grafana | logger=migrator t=2024-02-25T23:14:20.472101597Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.57549ms kafka | [2024-02-25 23:14:52,950] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-25T23:14:20.47908715Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" kafka | [2024-02-25 23:14:52,950] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | grafana | logger=migrator t=2024-02-25T23:14:20.479582729Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" kafka | [2024-02-25 23:14:52,950] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-pap | security.providers = null policy-db-migrator | grafana | logger=migrator t=2024-02-25T23:14:20.483126587Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" kafka | [2024-02-25 23:14:52,960] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | > upgrade 0150-pdpstatistics.sql grafana | logger=migrator t=2024-02-25T23:14:20.483597556Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" kafka | [2024-02-25 23:14:52,961] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | session.timeout.ms = 45000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-25T23:14:20.487887226Z level=info msg="Executing migration" id="Drop old dashboard public config table" kafka | [2024-02-25 23:14:52,961] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL grafana | logger=migrator t=2024-02-25T23:14:20.488652852Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=765.406µs kafka | [2024-02-25 23:14:52,961] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-25T23:14:20.495357519Z level=info msg="Executing migration" id="recreate dashboard public config v1" kafka | [2024-02-25 23:14:52,961] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
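For reference, the pdpstatistics rework that upgrade scripts 0120-0150 apply piecewise above reads as follows when collected into one sequence. The statements are taken verbatim from the policy-db-migrator output; the comments and trailing semicolons are editorial, and a MariaDB/MySQL dialect is assumed:

-- 0120-pk_pdpstatistics.sql: drop the old primary key
ALTER TABLE pdpstatistics DROP PRIMARY KEY;
-- 0130-pdpstatistics.sql: add the undeploy counters and a surrogate ID column
ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL;
-- 0140-pk_pdpstatistics.sql: number the existing rows in timestamp order, then rebuild the key around ID
UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num;
ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version);
-- 0150-pdpstatistics.sql: timeStamp is no longer part of the key, so it can become a nullable datetime(6)
ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL;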
policy-pap | ssl.cipher.suites = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:20.496691065Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.332466ms
kafka | [2024-02-25 23:14:52,970] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:20.505287298Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
kafka | [2024-02-25 23:14:52,972] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | ssl.endpoint.identification.algorithm = https
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-02-25T23:14:20.506381198Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.09381ms
kafka | [2024-02-25 23:14:52,972] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
policy-pap | ssl.engine.factory.class = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:20.512201849Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-02-25 23:14:52,973] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | ssl.key.password = null
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
grafana | logger=migrator t=2024-02-25T23:14:20.513358982Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.156212ms
kafka | [2024-02-25 23:14:52,973] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | ssl.keymanager.algorithm = SunX509
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:20.520005427Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
kafka | [2024-02-25 23:14:52,987] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.keystore.certificate.chain = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:20.521051777Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.04591ms
kafka | [2024-02-25 23:14:52,988] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | ssl.keystore.key = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:20.527717994Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
kafka | [2024-02-25 23:14:52,988] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
policy-pap | ssl.keystore.location = null
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-02-25T23:14:20.529456256Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.746232ms
kafka | [2024-02-25 23:14:52,988] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | ssl.keystore.password = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:20.5369921Z level=info msg="Executing migration" id="Drop public config table"
kafka | [2024-02-25 23:14:52,988] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | ssl.keystore.type = JKS
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
grafana | logger=migrator t=2024-02-25T23:14:20.538996878Z level=info msg="Migration successfully executed" id="Drop public config table" duration=2.004178ms
kafka | [2024-02-25 23:14:52,999] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator | JOIN pdpstatistics b
grafana | logger=migrator t=2024-02-25T23:14:20.547114243Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
kafka | [2024-02-25 23:14:52,999] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | ssl.provider = null
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
grafana | logger=migrator t=2024-02-25T23:14:20.548181443Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.067269ms
kafka | [2024-02-25 23:14:52,999] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
policy-pap | ssl.secure.random.implementation = null
policy-db-migrator | SET a.id = b.id
grafana | logger=migrator t=2024-02-25T23:14:20.555058483Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
kafka | [2024-02-25 23:14:52,999] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:20.556515891Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.457988ms
kafka | [2024-02-25 23:14:52,999] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-pap | ssl.truststore.certificates = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:20.560618549Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
kafka | [2024-02-25 23:14:53,008] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | ssl.truststore.location = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:20.561477676Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=859.117µs
kafka | [2024-02-25 23:14:53,009] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | ssl.truststore.password = null
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-02-25T23:14:20.567754505Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
kafka | [2024-02-25 23:14:53,009] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
policy-pap | ssl.truststore.type = JKS
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:20.568897646Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.143991ms
kafka | [2024-02-25 23:14:53,009] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
grafana | logger=migrator t=2024-02-25T23:14:20.573497084Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
kafka | [2024-02-25 23:14:53,009] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
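The jpapdpstatistics_enginestats scripts 0160-0180 above follow the same pattern: the table borrows the ids just assigned to pdpstatistics, then drops the timeStamp column it used only for the join. A sketch assembled from the migrator output (comments and semicolons editorial):

-- 0160: add a surrogate ID column after UPTIME
ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME;
-- 0170: copy each row's id from its matching pdpstatistics row
UPDATE jpapdpstatistics_enginestats a
JOIN pdpstatistics b
ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
SET a.id = b.id;
-- 0180: the join column is no longer needed once the ids are in place
ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp;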
policy-pap |
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:20.610991596Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=37.494692ms
kafka | [2024-02-25 23:14:53,016] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-02-25T23:14:51.811+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:20.618272944Z level=info msg="Executing migration" id="add annotations_enabled column"
kafka | [2024-02-25 23:14:53,017] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-02-25T23:14:51.811+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:20.62489114Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.617146ms
kafka | [2024-02-25 23:14:53,017] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
policy-pap | [2024-02-25T23:14:51.812+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902891811
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
grafana | logger=migrator t=2024-02-25T23:14:20.633751399Z level=info msg="Executing migration" id="add time_selection_enabled column"
kafka | [2024-02-25 23:14:53,017] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-02-25T23:14:51.812+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:20.642695899Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.94427ms
policy-pap | [2024-02-25T23:14:51.812+00:00|INFO|ServiceManager|main] Policy PAP starting topics
kafka | [2024-02-25 23:14:53,017] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
grafana | logger=migrator t=2024-02-25T23:14:20.649581319Z level=info msg="Executing migration" id="delete orphaned public dashboards"
kafka | [2024-02-25 23:14:53,027] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
policy-pap | [2024-02-25T23:14:51.812+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f430ca1f-0b14-4277-b999-dfdb1b16d100, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
grafana | logger=migrator t=2024-02-25T23:14:20.650055948Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=480.289µs
kafka | [2024-02-25 23:14:53,030] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator |
policy-pap | [2024-02-25T23:14:51.813+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=bd340acf-32e5-46ed-9341-bc882164db21, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
grafana | logger=migrator t=2024-02-25T23:14:20.656285527Z level=info msg="Executing migration" id="add share column"
kafka | [2024-02-25 23:14:53,030] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
policy-db-migrator |
policy-pap | [2024-02-25T23:14:51.813+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=00b26071-c70f-48c7-b06a-57ab45326f51, alive=false, publisher=null]]: starting
grafana | logger=migrator t=2024-02-25T23:14:20.666129834Z level=info msg="Migration successfully executed" id="add share column" duration=9.841887ms
kafka | [2024-02-25 23:14:53,030] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-pap | [2024-02-25T23:14:51.846+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-02-25 23:14:53,031] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:20.670570939Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
policy-db-migrator | --------------
policy-pap | acks = -1
kafka | [2024-02-25 23:14:53,039] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-25T23:14:20.670759672Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=187.993µs
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-02-25 23:14:53,039] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-25T23:14:20.678423137Z level=info msg="Executing migration" id="create file table"
policy-db-migrator | --------------
policy-pap | batch.size = 16384
kafka | [2024-02-25 23:14:53,039] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.680486047Z level=info msg="Migration successfully executed" id="create file table" duration=2.06941ms
policy-db-migrator |
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-02-25 23:14:53,039] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.684938521Z level=info msg="Executing migration" id="file table idx: path natural pk"
policy-db-migrator |
policy-pap | buffer.memory = 33554432
kafka | [2024-02-25 23:14:53,040] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
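Scripts 0190-jpapolicyaudit.sql and 0200-JpaPolicyAuditIndex_timestamp.sql above create the audit table and index it by TIMESTAMP so audit records can be queried by time range. Assembled from the migrator output (comments and semicolons editorial):

-- 0190-jpapolicyaudit.sql
CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version));
-- 0200-JpaPolicyAuditIndex_timestamp.sql
CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP);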
grafana | logger=migrator t=2024-02-25T23:14:20.686219495Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.280854ms
policy-db-migrator | > upgrade 0210-sequence.sql
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-02-25 23:14:53,048] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-25T23:14:20.690239382Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
policy-db-migrator | --------------
policy-pap | client.id = producer-1
kafka | [2024-02-25 23:14:53,049] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-25T23:14:20.691533056Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.218003ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-pap | compression.type = none
kafka | [2024-02-25 23:14:53,049] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.69908961Z level=info msg="Executing migration" id="create file_meta table"
policy-db-migrator | --------------
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-02-25 23:14:53,049] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.699853725Z level=info msg="Migration successfully executed" id="create file_meta table" duration=764.145µs
policy-db-migrator |
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-02-25 23:14:53,049] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:20.706590653Z level=info msg="Executing migration" id="file table idx: path key"
policy-db-migrator |
policy-pap | enable.idempotence = true
kafka | [2024-02-25 23:14:53,060] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-25T23:14:20.707875717Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.284544ms
policy-db-migrator | > upgrade 0220-sequence.sql
policy-pap | interceptor.classes = []
kafka | [2024-02-25 23:14:53,060] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-25T23:14:20.713442233Z level=info msg="Executing migration" id="set path collation in file table"
policy-db-migrator | --------------
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-02-25 23:14:53,060] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.713538745Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=100.632µs
policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
policy-pap | linger.ms = 0
kafka | [2024-02-25 23:14:53,060] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.72169884Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
policy-db-migrator | --------------
policy-pap | max.block.ms = 60000
kafka | [2024-02-25 23:14:53,060] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
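Scripts 0210 and 0220 above seed a JPA-style sequence table so that newly generated ids continue from the highest id already written to pdpstatistics; the IFNULL makes the seed 0 when the table is empty. Assembled from the migrator output (comments and semicolons editorial):

-- 0210-sequence.sql
CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME));
-- 0220-sequence.sql
INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics));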
grafana | logger=migrator t=2024-02-25T23:14:20.721779151Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=81.531µs
policy-db-migrator |
policy-pap | max.in.flight.requests.per.connection = 5
kafka | [2024-02-25 23:14:53,068] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-25T23:14:20.726586843Z level=info msg="Executing migration" id="managed permissions migration"
policy-db-migrator |
policy-pap | max.request.size = 1048576
kafka | [2024-02-25 23:14:53,069] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-25T23:14:20.727158764Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=568.101µs
policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-02-25 23:14:53,069] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.732041727Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
policy-db-migrator | --------------
policy-pap | metadata.max.idle.ms = 300000
kafka | [2024-02-25 23:14:53,069] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.73224387Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=202.123µs
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
policy-pap | metric.reporters = []
kafka | [2024-02-25 23:14:53,069] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:20.737070362Z level=info msg="Executing migration" id="RBAC action name migrator"
policy-db-migrator | --------------
policy-pap | metrics.num.samples = 2
kafka | [2024-02-25 23:14:53,077] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-25T23:14:20.73799307Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=922.708µs
policy-db-migrator |
policy-pap | metrics.recording.level = INFO
kafka | [2024-02-25 23:14:53,077] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-25T23:14:20.74327206Z level=info msg="Executing migration" id="Add UID column to playlist"
policy-db-migrator |
policy-pap | metrics.sample.window.ms = 30000
kafka | [2024-02-25 23:14:53,077] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.752413793Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.129373ms
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
policy-pap | partitioner.adaptive.partitioning.enable = true
kafka | [2024-02-25 23:14:53,077] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.819111581Z level=info msg="Executing migration" id="Update uid column values in playlist"
policy-db-migrator | --------------
policy-pap | partitioner.availability.timeout.ms = 0
kafka | [2024-02-25 23:14:53,078] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:20.819620241Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=516.69µs
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
policy-pap | partitioner.class = null
kafka | [2024-02-25 23:14:53,086] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-25T23:14:20.827619793Z level=info msg="Executing migration" id="Add index for uid in playlist"
policy-db-migrator | --------------
policy-pap | partitioner.ignore.keys = false
kafka | [2024-02-25 23:14:53,086] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-25T23:14:20.828763354Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.146051ms
policy-db-migrator |
policy-pap | receive.buffer.bytes = 32768
kafka | [2024-02-25 23:14:53,086] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.8337596Z level=info msg="Executing migration" id="update group index for alert rules"
policy-db-migrator |
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-02-25T23:14:20.834237648Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=478.809µs
kafka | [2024-02-25 23:14:53,086] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | > upgrade 0120-toscatrigger.sql
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-02-25T23:14:20.838162402Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
kafka | [2024-02-25 23:14:53,086] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
policy-pap | request.timeout.ms = 30000
kafka | [2024-02-25 23:14:53,094] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
policy-pap | retries = 2147483647
kafka | [2024-02-25 23:14:53,095] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | retry.backoff.ms = 100
kafka | [2024-02-25 23:14:53,095] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.83852405Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=361.768µs
policy-db-migrator |
policy-pap | sasl.client.callback.handler.class = null
kafka | [2024-02-25 23:14:53,095] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.84330885Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
policy-db-migrator |
policy-pap | sasl.jaas.config = null
kafka | [2024-02-25 23:14:53,095] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:20.843996154Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=684.354µs
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | [2024-02-25 23:14:53,107] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-25T23:14:20.848671452Z level=info msg="Executing migration" id="add action column to seed_assignment"
policy-db-migrator | --------------
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | [2024-02-25 23:14:53,107] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-25T23:14:20.858068182Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.39606ms
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
policy-pap | sasl.kerberos.service.name = null
kafka | [2024-02-25 23:14:53,107] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.863627737Z level=info msg="Executing migration" id="add scope column to seed_assignment"
policy-db-migrator | --------------
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-02-25 23:14:53,107] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.872793351Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.164954ms
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-02-25 23:14:53,108] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:20.878417998Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
policy-db-migrator |
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-02-25 23:14:53,115] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-25T23:14:20.879232613Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=814.635µs
policy-db-migrator | > upgrade 0140-toscaparameter.sql
policy-pap | sasl.login.class = null
kafka | [2024-02-25 23:14:53,115] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-25T23:14:20.885908911Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
policy-db-migrator | --------------
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-02-25 23:14:53,115] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:20.998639513Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=112.731702ms
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-02-25 23:14:53,115] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:21.004100926Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-02-25 23:14:53,115] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:21.005318449Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.217363ms
policy-db-migrator |
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-02-25 23:14:53,121] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-25T23:14:21.009993078Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
policy-db-migrator |
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | [2024-02-25 23:14:53,122] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-25T23:14:21.011256961Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.263753ms
policy-db-migrator | > upgrade 0150-toscaproperty.sql
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-02-25 23:14:53,122] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:21.017358896Z level=info msg="Executing migration" id="add primary key to seed_assigment"
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | [2024-02-25 23:14:53,122] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:21.055443927Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=38.081811ms
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-02-25 23:14:53,122] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-25T23:14:21.061332428Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
policy-db-migrator | --------------
policy-pap | sasl.mechanism = GSSAPI
kafka | [2024-02-25 23:14:53,130] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-25T23:14:21.061596443Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=260.565µs
policy-db-migrator |
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-02-25 23:14:53,130] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-25T23:14:21.066909134Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | [2024-02-25 23:14:53,130] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:21.067413373Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=506.189µs
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-02-25 23:14:53,130] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-25T23:14:21.075903854Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | [2024-02-25 23:14:53,130] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:21.076291082Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=387.658µs
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-02-25 23:14:53,137] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:21.080271287Z level=info msg="Executing migration" id="create folder table"
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | [2024-02-25 23:14:53,138] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:21.081408998Z level=info msg="Migration successfully executed" id="create folder table" duration=1.138241ms
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | [2024-02-25 23:14:53,138] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
grafana | logger=migrator t=2024-02-25T23:14:21.08626766Z level=info msg="Executing migration" id="Add index for parent_uid"
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | [2024-02-25 23:14:53,138] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:21.087452812Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.184622ms
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-02-25 23:14:53,138] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:21.093695131Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | [2024-02-25 23:14:53,145] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:21.094900773Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.205252ms
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-02-25 23:14:53,146] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
grafana | logger=migrator t=2024-02-25T23:14:21.099232806Z level=info msg="Executing migration" id="Update folder title length"
policy-pap | security.providers = null
kafka | [2024-02-25 23:14:53,146] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:21.099275796Z level=info msg="Migration successfully executed" id="Update folder title length" duration=44.51µs
policy-pap | send.buffer.bytes = 131072
kafka | [2024-02-25 23:14:53,146] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
grafana | logger=migrator t=2024-02-25T23:14:21.1063779Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-02-25 23:14:53,146] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:21.108255856Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.878026ms
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-02-25 23:14:53,153] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:21.113617238Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
policy-pap | ssl.cipher.suites = null
kafka | [2024-02-25 23:14:53,154] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:21.115390051Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.772763ms
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-02-25 23:14:53,154] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
grafana | logger=migrator t=2024-02-25T23:14:21.119250694Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-02-25 23:14:53,154] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:21.120381595Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.130551ms
policy-pap | ssl.engine.factory.class = null
kafka | [2024-02-25 23:14:53,154] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:21.128977998Z level=info msg="Executing migration" id="Sync dashboard and folder table"
policy-pap | ssl.key.password = null
kafka | [2024-02-25 23:14:53,164] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator |
grafana | logger=migrator t=2024-02-25T23:14:21.129540059Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=558.661µs
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-25 23:14:53,164] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
grafana | logger=migrator t=2024-02-25T23:14:21.135491761Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-02-25 23:14:53,165] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-25T23:14:21.135807547Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=313.856µs
policy-pap | ssl.keystore.key = null
kafka | [2024-02-25 23:14:53,165] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
grafana | logger=migrator t=2024-02-25T23:14:21.139548418Z level=info msg="Executing migration" id="create anon_device table"
policy-pap | ssl.keystore.location = null
kafka | [2024-02-25 23:14:53,165] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(9kyEG5R7S_ymSJoFuQGdeg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-25T23:14:21.140418904Z level=info msg="Migration successfully executed" id="create anon_device table" duration=870.626µs policy-pap | ssl.keystore.password = null kafka | [2024-02-25 23:14:53,173] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | grafana | logger=migrator t=2024-02-25T23:14:21.146922068Z level=info msg="Executing migration" id="add unique index anon_device.device_id" policy-pap | ssl.keystore.type = JKS kafka | [2024-02-25 23:14:53,174] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-25T23:14:21.14811796Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.195552ms policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-02-25 23:14:53,174] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) grafana | logger=migrator t=2024-02-25T23:14:21.153673165Z level=info msg="Executing migration" id="add index anon_device.updated_at" policy-pap | ssl.provider = null kafka | [2024-02-25 23:14:53,174] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-25T23:14:21.155412018Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.738323ms policy-pap | ssl.secure.random.implementation = null kafka | [2024-02-25 23:14:53,174] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
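The 0160/0170 steps above replace the implicit primary keys on jpapolicyaudit and pdpstatistics with named constraints, using a drop-then-add pattern. A minimal standalone sketch of that pattern, assuming the MariaDB/MySQL dialect the migrator targets (statements taken from the log; the trailing semicolons are added for standalone execution):

    -- 0160-jpapolicyaudit_pk.sql / 0170-pdpstatistics_pk.sql pattern:
    -- drop the existing key, then re-add it as a named constraint on ID.
    ALTER TABLE jpapolicyaudit DROP PRIMARY KEY;
    ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID);
    ALTER TABLE pdpstatistics DROP PRIMARY KEY;
    ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID);

Giving the constraint an explicit name lets later migration scripts reference or drop it deterministically instead of relying on an engine-generated name.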
grafana | logger=migrator t=2024-02-25T23:14:21.159614987Z level=info msg="Executing migration" id="create signing_key table"
grafana | logger=migrator t=2024-02-25T23:14:21.160404323Z level=info msg="Migration successfully executed" id="create signing_key table" duration=788.646µs
grafana | logger=migrator t=2024-02-25T23:14:21.164976719Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
grafana | logger=migrator t=2024-02-25T23:14:21.166170262Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.193573ms
grafana | logger=migrator t=2024-02-25T23:14:21.173118663Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
grafana | logger=migrator t=2024-02-25T23:14:21.174608291Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.489458ms
grafana | logger=migrator t=2024-02-25T23:14:21.240830664Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
grafana | logger=migrator t=2024-02-25T23:14:21.241524756Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=695.112µs
grafana | logger=migrator t=2024-02-25T23:14:21.250489327Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
grafana | logger=migrator t=2024-02-25T23:14:21.2643804Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=13.892023ms
grafana | logger=migrator t=2024-02-25T23:14:21.268799982Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
grafana | logger=migrator t=2024-02-25T23:14:21.269445424Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=646.422µs
grafana | logger=migrator t=2024-02-25T23:14:21.272634046Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2024-02-25T23:14:21.274043182Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.408956ms
grafana | logger=migrator t=2024-02-25T23:14:21.279866992Z level=info msg="Executing migration" id="create sso_setting table"
grafana | logger=migrator t=2024-02-25T23:14:21.281297729Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.430157ms
grafana | logger=migrator t=2024-02-25T23:14:21.289169398Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
grafana | logger=migrator t=2024-02-25T23:14:21.290114106Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=945.388µs
grafana | logger=migrator t=2024-02-25T23:14:21.295823914Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
grafana | logger=migrator t=2024-02-25T23:14:21.296214671Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=391.187µs
grafana | logger=migrator t=2024-02-25T23:14:21.300090894Z level=info msg="migrations completed" performed=526 skipped=0 duration=5.670914538s
grafana | logger=sqlstore t=2024-02-25T23:14:21.312978378Z level=info msg="Created default admin" user=admin
grafana | logger=sqlstore t=2024-02-25T23:14:21.313294464Z level=info msg="Created default organization"
grafana | logger=secrets t=2024-02-25T23:14:21.318635165Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
grafana | logger=plugin.store t=2024-02-25T23:14:21.337100015Z level=info msg="Loading plugins..."
grafana | logger=local.finder t=2024-02-25T23:14:21.378954386Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
grafana | logger=plugin.store t=2024-02-25T23:14:21.379020628Z level=info msg="Plugins loaded" count=55 duration=41.921913ms
grafana | logger=query_data t=2024-02-25T23:14:21.389820011Z level=info msg="Query Service initialization"
grafana | logger=live.push_http t=2024-02-25T23:14:21.397047458Z level=info msg="Live Push Gateway initialization"
grafana | logger=ngalert.migration t=2024-02-25T23:14:21.402848078Z level=info msg=Starting
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | transaction.timeout.ms = 60000
policy-pap | transactional.id = null
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap |
policy-pap | [2024-02-25T23:14:51.861+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-pap | [2024-02-25T23:14:51.878+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-pap | [2024-02-25T23:14:51.878+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-pap | [2024-02-25T23:14:51.878+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902891878
policy-pap | [2024-02-25T23:14:51.878+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=00b26071-c70f-48c7-b06a-57ab45326f51, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2024-02-25T23:14:51.878+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e8b20985-7a16-4249-8f92-c0d245467f15, alive=false, publisher=null]]: starting
policy-pap | [2024-02-25T23:14:51.879+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | acks = -1
policy-pap | auto.include.jmx.reporter = true
policy-pap | batch.size = 16384
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | buffer.memory = 33554432
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = producer-2
policy-pap | compression.type = none
policy-pap | connections.max.idle.ms = 540000
policy-pap | delivery.timeout.ms = 120000
policy-pap | enable.idempotence = true
policy-pap | interceptor.classes = []
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | linger.ms = 0
kafka | [2024-02-25 23:14:53,180] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,181] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,181] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,181] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,181] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,190] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,191] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,191] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,191] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,191] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,197] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,198] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,198] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,198] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,198] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,209] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,210] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,210] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,210] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,210] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,216] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,217] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,217] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,217] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,217] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,224] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,224] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,224] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,224] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,224] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0100-upgrade.sql
policy-db-migrator | --------------
policy-db-migrator | select 'upgrade to 1100 completed' as msg
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | msg
policy-db-migrator | upgrade to 1100 completed
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-db-migrator | --------------
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
policy-db-migrator | --------------
policy-db-migrator |
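The 0110-idx_tsidx1.sql step rebuilds the pdpstatistics index rather than altering it in place: the old index is dropped and a composite one created. A minimal sketch under the same MariaDB assumption (statements from the log, semicolons added):

    -- 0110-idx_tsidx1.sql pattern: replace IDX_TSIDX1 with a composite index.
    DROP INDEX IDX_TSIDX1 ON pdpstatistics;
    CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version);

Putting timeStamp first in the key suggests the index is intended to serve time-ranged statistics lookups, with name and version as secondary filters.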
grafana | logger=ngalert.migration orgID=1 t=2024-02-25T23:14:21.40347573Z level=info msg="Migrating alerts for organisation"
grafana | logger=ngalert.migration orgID=1 t=2024-02-25T23:14:21.404086612Z level=info msg="Alerts found to migrate" alerts=0
grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-25T23:14:21.405514328Z level=info msg="Completed legacy migration"
grafana | logger=infra.usagestats.collector t=2024-02-25T23:14:21.443401256Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
grafana | logger=provisioning.datasources t=2024-02-25T23:14:21.445925352Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
grafana | logger=provisioning.alerting t=2024-02-25T23:14:21.461413096Z level=info msg="starting to provision alerting"
grafana | logger=provisioning.alerting t=2024-02-25T23:14:21.461438246Z level=info msg="finished to provision alerting"
grafana | logger=ngalert.state.manager t=2024-02-25T23:14:21.46164262Z level=info msg="Warming state cache for startup"
grafana | logger=ngalert.state.manager t=2024-02-25T23:14:21.462309583Z level=info msg="State cache has been initialized" states=0 duration=666.032µs
grafana | logger=ngalert.scheduler t=2024-02-25T23:14:21.462500926Z level=info msg="Starting scheduler" tickInterval=10s
grafana | logger=ticker t=2024-02-25T23:14:21.462593298Z level=info msg=starting first_tick=2024-02-25T23:14:30Z
grafana | logger=grafanaStorageLogger t=2024-02-25T23:14:21.463272331Z level=info msg="Storage starting"
grafana | logger=ngalert.multiorg.alertmanager t=2024-02-25T23:14:21.463511616Z level=info msg="Starting MultiOrg Alertmanager"
grafana | logger=http.server t=2024-02-25T23:14:21.466120335Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
grafana | logger=grafana-apiserver t=2024-02-25T23:14:21.480813203Z level=info msg="Authentication is disabled"
policy-pap | max.block.ms = 60000
policy-pap | max.in.flight.requests.per.connection = 5
policy-pap | max.request.size = 1048576
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.max.idle.ms = 300000
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partitioner.adaptive.partitioning.enable = true
policy-pap | partitioner.availability.timeout.ms = 0
policy-pap | partitioner.class = null
policy-pap | partitioner.ignore.keys = false
policy-pap | receive.buffer.bytes = 32768
policy-pap | reconnect.backoff.max.ms = 1000
kafka | [2024-02-25 23:14:53,231] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,232] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,232] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,232] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,232] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,239] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,239] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,239] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,239] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,240] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,247] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,247] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,247] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,247] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,248] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator |
policy-db-migrator | > upgrade 0120-audit_sequence.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
policy-db-migrator | --------------
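The 0120-audit_sequence.sql and 0130-statistics_sequence.sql steps create JPA sequence tables and seed them from the highest existing row id, so generated keys resume above data already in the table. A sketch of the seeding pattern, assembled from the statements in the log (formatting and semicolons added):

    CREATE TABLE IF NOT EXISTS audit_sequence (
        SEQ_NAME VARCHAR(50) NOT NULL,
        SEQ_COUNT DECIMAL(38) DEFAULT NULL,
        PRIMARY KEY PK_SEQUENCE (SEQ_NAME)
    );
    -- Seed the generator from the current max id (0 when the table is empty).
    INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT)
    VALUES ('SEQ_GEN', (SELECT IFNULL(max(id), 0) FROM jpapolicyaudit));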
grafana | logger=grafana-apiserver t=2024-02-25T23:14:21.484508873Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
grafana | logger=sqlstore.transactions t=2024-02-25T23:14:21.582449155Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=plugins.update.checker t=2024-02-25T23:14:21.608168871Z level=info msg="Update check succeeded" duration=144.635805ms
grafana | logger=sqlstore.transactions t=2024-02-25T23:14:21.683321743Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=sqlstore.transactions t=2024-02-25T23:14:21.695602576Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
grafana | logger=grafana.update.checker t=2024-02-25T23:14:21.888582945Z level=info msg="Update check succeeded" duration=423.923028ms
grafana | logger=infra.usagestats t=2024-02-25T23:15:12.47581418Z level=info msg="Usage stats are ready to report"
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retries = 2147483647
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-02-25 23:14:53,258] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,260] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,260] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,260] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,261] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,269] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,269] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,269] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,270] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,270] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,279] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,279] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,279] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,279] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,279] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,286] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,287] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,287] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,287] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,287] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,295] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,295] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,295] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,295] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,295] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,302] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-02-25 23:14:53,303] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-02-25 23:14:53,303] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,303] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-02-25 23:14:53,303] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(19qiw_gSQSuGAZ9hqdP69g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | TRUNCATE TABLE sequence
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE pdpstatistics
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE statistics_sequence
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | name version
policy-db-migrator | policyadmin 1300
policy-db-migrator | ID script operation from_version to_version tag success atTime
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:19
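The final 0100-0120 batch retires the PDP statistics schema outright: the index and tables created earlier are dropped and the shared sequence truncated, after which the migrator reports policyadmin: OK: upgrade (1300) and prints its audit table, one row per applied script. The teardown, collected from the statements shown in the log (order as logged, semicolons added for a standalone sketch):

    -- Final cleanup batch: statistics storage is removed entirely.
    TRUNCATE TABLE sequence;
    DROP INDEX IDXTSIDX1 ON pdpstatistics;
    DROP TABLE pdpstatistics;
    DROP TABLE jpapdpstatistics_enginestats;
    DROP TABLE statistics_sequence;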
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | transaction.timeout.ms = 60000
policy-pap | transactional.id = null
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap |
policy-pap | [2024-02-25T23:14:51.879+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708902891883
policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=e8b20985-7a16-4249-8f92-c0d245467f15, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-pap | [2024-02-25T23:14:51.883+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-pap | [2024-02-25T23:14:51.887+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-pap | [2024-02-25T23:14:51.887+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-pap | [2024-02-25T23:14:51.889+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-pap | [2024-02-25T23:14:51.889+00:00|INFO|TimerManager|Thread-9] timer manager update started
policy-pap | [2024-02-25T23:14:51.889+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-pap | [2024-02-25T23:14:51.889+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-pap | [2024-02-25T23:14:51.890+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-pap | [2024-02-25T23:14:51.891+00:00|INFO|ServiceManager|main] Policy PAP started
policy-pap | [2024-02-25T23:14:51.892+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.355 seconds (process running for 12.105)
policy-pap | [2024-02-25T23:14:51.894+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-pap | [2024-02-25T23:14:52.363+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2024-02-25T23:14:52.364+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ
policy-pap | [2024-02-25T23:14:52.365+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ
policy-pap | [2024-02-25T23:14:52.365+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ
policy-pap | [2024-02-25T23:14:52.406+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-02-25T23:14:52.406+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: EgVdN6KHQUyZtQ3qnQB0kQ
policy-pap | [2024-02-25T23:14:52.468+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-02-25T23:14:52.481+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
policy-pap | [2024-02-25T23:14:52.483+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
policy-pap | [2024-02-25T23:14:52.557+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-02-25T23:14:52.617+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-02-25T23:14:52.664+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-02-25 23:14:53,313] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-02-25 23:14:53,314] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-02-25 23:14:53,315] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-02-25 23:14:53,315] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-02-25 23:14:53,315] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-02-25 23:14:53,315] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-02-25 23:14:53,315] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-02-25 23:14:53,325] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:53,326] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:19
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:19
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:20
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:21
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800
2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:14:52.770+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-02-25T23:14:52.831+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:14:52.881+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-02-25T23:14:52.937+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:14:52.990+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-02-25T23:14:53.045+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:14:53.096+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-02-25T23:14:53.151+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-02-25T23:14:53.203+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:14:53.262+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-02-25T23:14:53.308+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:14:53.379+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-02-25T23:14:53.389+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] (Re-)joining group policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for 
epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:14:53.417+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:22 kafka | [2024-02-25 23:14:53,329] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-02-25T23:14:53.418+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Request joining group due to: need to re-join with the given member-id: consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:14:53.419+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-02-25T23:14:53.419+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] (Re-)joining group policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:14:53.420+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23 kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) policy-pap | [2024-02-25T23:14:53.433+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23 kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
policy-pap | [2024-02-25T23:14:53.433+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:14:53.433+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:14:56.455+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Successfully joined group with generation Generation{generationId=1, memberId='consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284', protocol='range'}
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:14:56.459+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a', protocol='range'}
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:14:56.466+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:14:56.466+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Finished assignment for group at generation 1: {consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:14:56.500+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a', protocol='range'}
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:14:56.500+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Successfully synced group in generation Generation{generationId=1, memberId='consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284', protocol='range'}
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
policy-pap | [2024-02-25T23:14:56.501+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
policy-pap | [2024-02-25T23:14:56.501+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
policy-pap | [2024-02-25T23:14:56.506+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Adding newly assigned partitions: policy-pdp-pap-0
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
policy-pap | [2024-02-25T23:14:56.506+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
policy-pap | [2024-02-25T23:14:56.527+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
policy-pap | [2024-02-25T23:14:56.527+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:14:56.545+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-bd340acf-32e5-46ed-9341-bc882164db21-3, groupId=bd340acf-32e5-46ed-9341-bc882164db21] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:23
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:14:56.545+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:01.611+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:01.611+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet'
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:01.613+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 2 ms
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.756+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers:
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | []
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.757+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2f1dcd45-4683-45cf-9d92-dddeb169e9b3","timestampMs":1708902913716,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.757+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2502242314190800u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"2f1dcd45-4683-45cf-9d92-dddeb169e9b3","timestampMs":1708902913716,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.769+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.844+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.844+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting listener
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.845+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting timer
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,330] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.845+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=8017ad77-05f8-444a-aa06-a451f278f050, expireMs=1708902943845]
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.847+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=8017ad77-05f8-444a-aa06-a451f278f050, expireMs=1708902943845]
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:24
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.847+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting enqueue
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.848+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate started
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.848+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"8017ad77-05f8-444a-aa06-a451f278f050","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.889+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"8017ad77-05f8-444a-aa06-a451f278f050","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2502242314190900u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.890+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.891+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"8017ad77-05f8-444a-aa06-a451f278f050","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.891+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.910+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4057075b-fac2-492c-a7f4-7d5372a2ee8d","timestampMs":1708902913898,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.912+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
policy-pap | [2024-02-25T23:15:13.918+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8017ad77-05f8-444a-aa06-a451f278f050","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d02f35-10cd-4f51-b9ca-9c8af9b90048","timestampMs":1708902913900,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2502242314191000u 1 2024-02-25 23:14:25
policy-pap | [2024-02-25T23:15:13.919+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2502242314191100u 1 2024-02-25 23:14:25
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"4057075b-fac2-492c-a7f4-7d5372a2ee8d","timestampMs":1708902913898,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup"}
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2502242314191200u 1 2024-02-25 23:14:25
policy-pap | [2024-02-25T23:15:13.919+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2502242314191200u 1 2024-02-25 23:14:25
policy-pap | [2024-02-25T23:15:13.920+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping enqueue
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2502242314191200u 1 2024-02-25 23:14:25
policy-pap | [2024-02-25T23:15:13.920+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping timer
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2502242314191200u 1 2024-02-25 23:14:25
policy-pap | [2024-02-25T23:15:13.920+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=8017ad77-05f8-444a-aa06-a451f278f050, expireMs=1708902943845]
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2502242314191300u 1 2024-02-25 23:14:26
policy-pap | [2024-02-25T23:15:13.920+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping listener
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2502242314191300u 1 2024-02-25 23:14:26
policy-pap | [2024-02-25T23:15:13.921+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopped
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2502242314191300u 1 2024-02-25 23:14:26
policy-pap | [2024-02-25T23:15:13.927+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate successful
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | policyadmin: OK @ 1300
policy-pap | [2024-02-25T23:15:13.927+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c start publishing next request
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.927+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange starting
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.927+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange starting listener
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.927+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange starting timer
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:53,331] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.928+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=cfeeaf9a-8c54-4457-9343-75107d5ce4da, expireMs=1708902943928]
kafka | [2024-02-25 23:14:53,331] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.928+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange starting enqueue
kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.928+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange started
kafka | [2024-02-25 23:14:53,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.928+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=cfeeaf9a-8c54-4457-9343-75107d5ce4da, expireMs=1708902943928]
kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.929+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-02-25 23:14:53,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.963+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-25 23:14:53,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.963+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
kafka | [2024-02-25 23:14:53,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.967+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"00c40a54-df00-48d3-a9d7-3e82bceb0900","timestampMs":1708902913940,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-25 23:14:53,332] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-pap | [2024-02-25T23:15:13.985+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange stopping
kafka | [2024-02-25 23:14:53,332] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange stopping enqueue
kafka | [2024-02-25 23:14:53,334] INFO [Broker id=1] Finished LeaderAndIsr request in 673ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
policy-pap | [2024-02-25T23:15:13.986+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange stopping timer
kafka | [2024-02-25 23:14:53,338] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 10 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.986+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=cfeeaf9a-8c54-4457-9343-75107d5ce4da, expireMs=1708902943928]
kafka | [2024-02-25 23:14:53,339] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange stopping listener
kafka | [2024-02-25 23:14:53,339] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange stopped
kafka | [2024-02-25 23:14:53,339] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.987+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpStateChange successful
kafka | [2024-02-25 23:14:53,339] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.987+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c start publishing next request
kafka | [2024-02-25 23:14:53,339] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.987+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting
kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.988+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting listener
kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.988+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting timer
kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.988+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=a77fa683-80f4-4771-a123-a237db6bdd66, expireMs=1708902943988]
kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.988+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate starting enqueue
kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.989+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate started
policy-pap | [2024-02-25T23:15:13.989+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a77fa683-80f4-4771-a123-a237db6bdd66","timestampMs":1708902913954,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-25 23:14:53,340] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=19qiw_gSQSuGAZ9hqdP69g, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=9kyEG5R7S_ymSJoFuQGdeg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
policy-pap | [2024-02-25T23:15:13.992+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"8017ad77-05f8-444a-aa06-a451f278f050","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"77d02f35-10cd-4f51-b9ca-9c8af9b90048","timestampMs":1708902913900,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | [2024-02-25T23:15:13.992+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 8017ad77-05f8-444a-aa06-a451f278f050
policy-pap | [2024-02-25T23:15:13.995+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","timestampMs":1708902913830,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-25 23:14:53,340] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:13.996+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:13.996+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cfeeaf9a-8c54-4457-9343-75107d5ce4da","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"00c40a54-df00-48d3-a9d7-3e82bceb0900","timestampMs":1708902913940,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:13.996+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id cfeeaf9a-8c54-4457-9343-75107d5ce4da kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.001+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a77fa683-80f4-4771-a123-a237db6bdd66","timestampMs":1708902913954,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.001+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE kafka | [2024-02-25 23:14:53,341] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.003+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | {"source":"pap-b576e5f8-f5c3-4cd4-b7a9-ba9546dfcb5d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"a77fa683-80f4-4771-a123-a237db6bdd66","timestampMs":1708902913954,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.004+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.008+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a77fa683-80f4-4771-a123-a237db6bdd66","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"55953c4d-82bc-4c85-8ce1-d8e5f2afa2ca","timestampMs":1708902914002,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.009+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a77fa683-80f4-4771-a123-a237db6bdd66 kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.011+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a77fa683-80f4-4771-a123-a237db6bdd66","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"55953c4d-82bc-4c85-8ce1-d8e5f2afa2ca","timestampMs":1708902914002,"name":"apex-f8f852ea-ec99-457c-8abb-a88c72ec947c","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-02-25T23:15:14.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping enqueue kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping timer kafka | [2024-02-25 23:14:53,342] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.012+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=a77fa683-80f4-4771-a123-a237db6bdd66, expireMs=1708902943988] kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopping listener kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.013+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate stopped kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.019+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c PdpUpdate successful kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:14.019+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-f8f852ea-ec99-457c-8abb-a88c72ec947c has no more requests kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:22.311+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:22.318+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:22.762+00:00|INFO|SessionData|http-nio-6969-exec-8] unknown group testGroup kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:23.380+00:00|INFO|SessionData|http-nio-6969-exec-8] create cached group testGroup kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:23.380+00:00|INFO|SessionData|http-nio-6969-exec-8] creating DB group testGroup kafka | [2024-02-25 23:14:53,343] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:23.937+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group testGroup kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:24.219+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy onap.restart.tca 1.0.0 kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:24.328+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:24.328+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group testGroup kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:24.329+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group testGroup kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) policy-pap | [2024-02-25T23:15:24.344+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-25T23:15:24Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-25T23:15:24Z, user=policyadmin)] kafka | [2024-02-25 23:14:53,344] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
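The testGroup deploy activity above is driven by the CSIT suite calling PAP's REST API on port 6969. A minimal sketch of that kind of call is below; the /policy/pap/v1/pdps/policies path follows the published PAP API layout and the credentials are a placeholder, neither is taken from this log:
# Hedged sketch: deploy a policy through PAP's REST API (path and credentials assumed, not shown in this log)
curl -sk -u 'policyadmin:CHANGEME' \
  -X POST 'https://localhost:6969/policy/pap/v1/pdps/policies' \
  -H 'Content-Type: application/json' \
  -d '{"policies": [{"policy-id": "onap.restart.tca", "policy-version": "1.0.0"}]}'
The PolicyAuditManager DEPLOYMENT records above are the server-side trace of exactly such a request.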
policy-pap | [2024-02-25T23:15:25.068+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:25.069+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-3] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:25.069+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering an undeploy for policy onap.restart.tca 1.0.0
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:25.069+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group testGroup
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:25.070+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group testGroup
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:25.085+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-25T23:15:25Z, user=policyadmin)]
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:25.473+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:25.473+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
policy-pap | [2024-02-25T23:15:25.474+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:25.474+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:25.474+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:25.474+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:25.485+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-25T23:15:25Z, user=policyadmin)]
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:43.845+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=8017ad77-05f8-444a-aa06-a451f278f050, expireMs=1708902943845]
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:43.929+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=cfeeaf9a-8c54-4457-9343-75107d5ce4da, expireMs=1708902943928]
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:46.100+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:15:46.102+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
policy-pap | [2024-02-25T23:16:51.891+00:00|INFO|PdpModifyRequestMap|pool-3-thread-1] check for PDP records older than 360000ms
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,348] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,349] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,350] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-25 23:14:53,350] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-02-25 23:14:53,412] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group bd340acf-32e5-46ed-9341-bc882164db21 in Empty state. Created a new member id consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:53,424] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:53,431] INFO [GroupCoordinator 1]: Preparing to rebalance group bd340acf-32e5-46ed-9341-bc882164db21 in state PreparingRebalance with old generation 0 (__consumer_offsets-9) (reason: Adding new member consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:53,435] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:54,107] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group b53cde7a-481f-427a-882b-d5bcee52ac2a in Empty state. Created a new member id consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:54,111] INFO [GroupCoordinator 1]: Preparing to rebalance group b53cde7a-481f-427a-882b-d5bcee52ac2a in state PreparingRebalance with old generation 0 (__consumer_offsets-47) (reason: Adding new member consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:56,449] INFO [GroupCoordinator 1]: Stabilized group bd340acf-32e5-46ed-9341-bc882164db21 generation 1 (__consumer_offsets-9) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:56,457] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:56,477] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-6c46eff2-b5c2-42c2-9cce-592a12f2118a for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:56,477] INFO [GroupCoordinator 1]: Assignment received from leader consumer-bd340acf-32e5-46ed-9341-bc882164db21-3-911cce17-68b0-464d-9d54-73b188d5a284 for group bd340acf-32e5-46ed-9341-bc882164db21 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:57,115] INFO [GroupCoordinator 1]: Stabilized group b53cde7a-481f-427a-882b-d5bcee52ac2a generation 1 (__consumer_offsets-47) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-25 23:14:57,135] INFO [GroupCoordinator 1]: Assignment received from leader consumer-b53cde7a-481f-427a-882b-d5bcee52ac2a-2-9cf0b086-c9c6-4375-b1e8-ff62debeccd7 for group b53cde7a-481f-427a-882b-d5bcee52ac2a for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
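The coordinator messages above show three consumer groups (policy-pap plus two UUID-named groups) joining, rebalancing and stabilizing at generation 1. A standard way to confirm the resulting assignments from inside the broker container is the stock Kafka CLI; a sketch, assuming the container and listener names from this log (on Confluent images the tool ships without the .sh suffix):
# Hedged sketch: inspect the consumer groups the coordinator just stabilized
docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --list
docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 \
    --describe --group policy-pap        # per-partition assignment and lag for generation 1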
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping kafka ...
Stopping policy-api ...
Stopping grafana ...
Stopping simulator ...
Stopping compose_zookeeper_1 ...
Stopping mariadb ...
Stopping prometheus ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing kafka ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing grafana ...
Removing simulator ...
Removing compose_zookeeper_1 ...
Removing mariadb ...
Removing prometheus ...
Removing policy-api ... done
Removing simulator ... done
Removing policy-pap ... done
Removing policy-apex-pdp ... done
Removing policy-db-migrator ... done
Removing compose_zookeeper_1 ... done
Removing grafana ... done
Removing kafka ... done
Removing prometheus ... done
Removing mariadb ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.Nh0lglCdc7 ]]
+ rsync -av /tmp/tmp.Nh0lglCdc7/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 918,975 bytes  received 95 bytes  1,838,140.00 bytes/sec
total size is 918,429  speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 0
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2142 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml:
Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
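For reference, the load_set trace above corresponds to a helper of roughly this shape (a reconstruction from the xtrace output, not the script source): it walks the long-form options in SHELLOPTS and the single-letter flags saved in _setopts and switches them off, which is why the xtrace output goes quiet after set +x.
# Reconstruction of load_set from the xtrace above (not the original script source)
load_set() {
    _setopts=hxB                                    # single-letter flags captured earlier from $-
    for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
        set +o "$i"                                 # switch off each long-form option (braceexpand, hashall, ...)
    done
    for i in $(echo "$_setopts" | sed 's/./& /g'); do
        set +"$i"                                   # switch off each saved flag; 'set +x' here silences the trace
    done
}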
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3777363861329545104.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1488974624590515388.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1001023641922184474.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NbUn from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-NbUn/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2194008244325888421.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config10248940132880237207tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11184646282004448228.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13091437881155059762.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NbUn from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-NbUn/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9896728310841662378.sh
---> sudo-logs.sh
Archiving 'sudo' log..
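The package-listing.sh trace above amounts to the following few lines (a sketch condensed from the xtrace, not the script source): dpkg -l output is filtered to installed packages ('^ii') and diffed against the snapshot taken at job start, with the results archived next to the build.
# Condensed sketch of the package-listing.sh logic traced above (Debian branch)
workspace=/w/workspace/policy-pap-master-project-csit-pap
start=/tmp/packages_start.txt
end=/tmp/packages_end.txt
diff_out=/tmp/packages_diff.txt
dpkg -l | grep '^ii' > "$end"                         # snapshot of installed packages at job end
if [ -f "$start" ] && [ -f "$end" ]; then
    diff "$start" "$end" > "$diff_out" || true        # diff exits 1 when the lists differ
fi
mkdir -p "$workspace/archives/"
cp -f "$diff_out" "$end" "$start" "$workspace/archives/"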
INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins13638010387743333355.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-NbUn from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-NbUn/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1591 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-8694 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 14G 142G 9% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 859 25101 0 6206 30852 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:a4:a0:2f brd ff:ff:ff:ff:ff:ff inet 10.30.107.118/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 85930sec preferred_lft 85930sec inet6 fe80::f816:3eff:fea4:a02f/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:b4:d7:e4:b6 brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-8694) 02/25/24 _x86_64_ (8 CPU) 23:10:24 LINUX RESTART (8 CPU) 23:11:01 tps rtps wtps bread/s bwrtn/s 23:12:01 114.85 36.13 78.72 1687.72 26761.41 23:13:01 126.40 23.20 103.20 2793.40 31793.10 23:14:01 212.07 0.17 211.90 15.42 122584.68 23:15:01 339.96 
23:11:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
23:12:01     30082168  31671396   2857052      8.67     69544   1831112   1456968      4.29    900916   1667092    155556
23:13:01     28917568  31664232   4021652     12.21     98960   2922284   1570140      4.62    991272   2662396    908888
23:14:01     25795500  31669464   7143720     21.69    140100   5861668   1457548      4.29   1018912   5599124    807752
23:15:01     23327516  29367544   9611704     29.18    156564   5992252   9091868     26.75   3499656   5506116      1660
23:16:01     23299336  29340064   9639884     29.27    156760   5992536   9101264     26.78   3529692   5503508       296
23:17:01     23332848  29400548   9606372     29.16    157124   6020724   8311384     24.45   3486740   5517740       396
23:18:01     25729972  31618020   7209248     21.89    160444   5853436   1615888      4.75   1300040   5365320     54952
Average:     25783558  30675895   7155662     21.72    134214   4924859   4657866     13.70   2103890   4545899    275643

23:11:01          IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
23:12:01        docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:12:01           ens3     73.70     53.14    943.46     21.20      0.00      0.00      0.00      0.00
23:12:01             lo      1.60      1.60      0.17      0.17      0.00      0.00      0.00      0.00
23:13:01        docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01 br-312cfb88b3b8     0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01           ens3    194.07    137.13   5338.11     14.42      0.00      0.00      0.00      0.00
23:13:01             lo      7.00      7.00      0.65      0.65      0.00      0.00      0.00      0.00
23:14:01        docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01 br-312cfb88b3b8     0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01           ens3   1010.10    560.31  26797.61     42.41      0.00      0.00      0.00      0.00
23:14:01             lo      6.25      6.25      0.63      0.63      0.00      0.00      0.00      0.00
23:15:01    veth476d79e      0.55      0.83      0.06      0.31      0.00      0.00      0.00      0.00
23:15:01    vetha208a11      1.70      1.90      0.34      0.18      0.00      0.00      0.00      0.00
23:15:01    veth0739b17      5.03      6.43      0.81      0.92      0.00      0.00      0.00      0.00
23:15:01    veth79bdb89      0.00      0.38      0.00      0.02      0.00      0.00      0.00      0.00
23:16:01    veth476d79e      0.25      0.20      0.02      0.01      0.00      0.00      0.00      0.00
23:16:01    vetha208a11      3.82      5.35      0.79      0.48      0.00      0.00      0.00      0.00
23:16:01    veth0739b17      0.17      0.35      0.01      0.02      0.00      0.00      0.00      0.00
23:16:01    veth79bdb89      0.00      0.02      0.00      0.00      0.00      0.00      0.00      0.00
23:17:01    vetha208a11      3.12      4.58      0.47      0.35      0.00      0.00      0.00      0.00
23:17:01    veth0739b17      0.17      0.37      0.01      0.03      0.00      0.00      0.00      0.00
23:17:01    veth79bdb89      0.00      0.03      0.00      0.00      0.00      0.00      0.00      0.00
23:17:01    veth690e18f     53.97     47.94     21.02     40.48      0.00      0.00      0.00      0.00
23:18:01        docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:01           ens3   1629.82   1001.52  33994.85    174.98      0.00      0.00      0.00      0.00
23:18:01             lo     35.00     35.00      6.20      6.20      0.00      0.00      0.00      0.00
Average:        docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:           ens3    195.52    116.55   4750.53     17.70      0.00      0.00      0.00      0.00
Average:             lo      4.44      4.44      0.84      0.84      0.00      0.00      0.00      0.00
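On this sysstat build, %memused in the memory table above appears to be kbmemused / (kbmemfree + kbmemused); a one-line check against the 23:12:01 row, with both values copied from the table and nothing else assumed:

  awk 'BEGIN { free = 30082168; used = 2857052;
               printf "%.2f%%\n", 100 * used / (free + used) }'
  # -> 8.67%, matching the table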
---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-8694)   02/25/24   _x86_64_   (8 CPU)

23:10:24     LINUX RESTART      (8 CPU)

23:11:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
23:12:01        all      9.75      0.00      0.79      2.51      0.03     86.92
23:12:01          0      0.83      0.00      0.28      0.07      0.02     98.80
23:12:01          1      1.46      0.00      0.27      0.70      0.02     97.56
23:12:01          2      0.83      0.00      0.47      0.22      0.00     98.48
23:12:01          3     13.77      0.00      0.73      1.17      0.07     84.26
23:12:01          4     31.92      0.00      1.82      1.25      0.03     64.98
23:12:01          5     25.11      0.00      2.07      1.88      0.03     70.91
23:12:01          6      0.99      0.00      0.42     14.78      0.03     83.78
23:12:01          7      3.04      0.00      0.28      0.02      0.02     96.64
23:13:01        all     10.85      0.00      1.93      2.39      0.04     84.80
23:13:01          0     28.77      0.00      3.21      2.16      0.05     65.81
23:13:01          1     12.42      0.00      2.05      0.44      0.03     85.06
23:13:01          2      6.70      0.00      1.54      0.05      0.03     91.67
23:13:01          3      2.54      0.00      0.94      1.32      0.02     95.19
23:13:01          4     12.21      0.00      2.19      1.21      0.03     84.36
23:13:01          5     16.67      0.00      1.99      1.24      0.07     80.03
23:13:01          6      3.23      0.00      1.59     10.87      0.03     84.28
23:13:01          7      4.30      0.00      1.92      1.77      0.07     91.93
23:14:01        all     11.53      0.00      5.20      8.07      0.06     75.15
23:14:01          0     10.65      0.00      5.59      0.96      0.05     82.75
23:14:01          1     14.30      0.00      5.48      0.17      0.07     79.98
23:14:01          2     13.06      0.00      5.11      0.10      0.07     81.66
23:14:01          3     11.45      0.00      4.94      8.56      0.07     74.99
23:14:01          4     11.90      0.00      7.11     19.30      0.07     61.62
23:14:01          5     10.67      0.00      4.75     16.14      0.07     68.37
23:14:01          6      9.42      0.00      4.14     18.39      0.05     68.00
23:14:01          7     10.72      0.00      4.52      1.10      0.05     83.61
23:15:01        all     28.84      0.00      4.13      4.15      0.08     62.80
23:15:01          0     26.96      0.00      4.02      1.13      0.08     67.82
23:15:01          1     18.99      0.00      3.40      2.07      0.07     75.47
23:15:01          2     31.81      0.00      4.51      3.30      0.07     60.32
23:15:01          3     36.29      0.00      4.42      0.49      0.07     58.74
23:15:01          4     33.11      0.00      4.10      0.84      0.07     61.88
23:15:01          5     28.24      0.00      4.03      1.69      0.08     65.96
23:15:01          6     32.05      0.00      4.53     16.63      0.10     46.68
23:15:01          7     23.37      0.00      3.95      7.11      0.07     65.50
23:16:01        all      5.08      0.00      0.51      1.19      0.06     93.17
23:16:01          0      3.99      0.00      0.47      0.00      0.07     95.48
23:16:01          1      5.04      0.00      0.42      0.02      0.05     94.47
23:16:01          2      4.86      0.00      0.60      0.03      0.07     94.44
23:16:01          3      4.84      0.00      0.45      0.08      0.08     94.54
23:16:01          4      6.97      0.00      0.75      0.03      0.07     92.17
23:16:01          5      5.21      0.00      0.48      0.00      0.05     94.25
23:16:01          6      4.84      0.00      0.48      0.00      0.03     94.64
23:16:01          7      4.89      0.00      0.42      9.34      0.07     85.29
23:17:01        all      1.39      0.00      0.33      1.26      0.05     96.97
23:17:01          0      1.65      0.00      0.37      0.08      0.05     97.85
23:17:01          1      1.14      0.00      0.35      0.00      0.05     98.46
23:17:01          2      1.50      0.00      0.35      0.48      0.02     97.64
23:17:01          3      1.00      0.00      0.35      0.03      0.07     98.55
23:17:01          4      1.39      0.00      0.32      0.02      0.07     98.21
23:17:01          5      1.97      0.00      0.27      0.08      0.03     97.65
23:17:01          6      1.33      0.00      0.32      0.02      0.03     98.30
23:17:01          7      1.10      0.00      0.37      9.33      0.08     89.12
23:18:01        all      6.95      0.00      0.74      1.61      0.04     90.66
23:18:01          0      2.55      0.00      0.58      0.28      0.02     96.57
23:18:01          1      2.74      0.00      0.62      1.19      0.03     95.41
23:18:01          2      0.70      0.00      0.57      0.28      0.03     98.41
23:18:01          3      5.61      0.00      0.73      0.27      0.03     93.36
23:18:01          4      3.19      0.00      0.77      0.10      0.02     95.93
23:18:01          5      0.88      0.00      0.47      0.10      0.02     98.53
23:18:01          6     37.10      0.00      1.50      0.87      0.05     60.48
23:18:01          7      2.84      0.00      0.65      9.78      0.05     86.67
Average:        all     10.61      0.00      1.94      3.01      0.05     84.39
Average:          0     10.75      0.00      2.07      0.67      0.05     86.47
Average:          1      8.01      0.00      1.80      0.66      0.05     89.49
Average:          2      8.47      0.00      1.87      0.64      0.04     88.99
Average:          3     10.76      0.00      1.79      1.69      0.06     85.71
Average:          4     14.37      0.00      2.43      3.22      0.05     79.93
Average:          5     12.66      0.00      2.00      2.99      0.05     82.30
Average:          6     12.69      0.00      1.85      8.76      0.05     76.65
Average:          7      7.16      0.00      1.72      5.50      0.06     85.56
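As a final sanity check, the "Average: all" line of the CPU table can be reproduced approximately from the interval rows. This is a sketch assuming the sar -P ALL report above has been saved to a hypothetical sar-cpu.txt; sar's own Average line is weighted by exact interval length, hence the small differences.

  awk '$1 ~ /^23:/ && $2 == "all" { user += $3; iowait += $6; idle += $8; n++ }
       END { printf "%%user=%.2f  %%iowait=%.2f  %%idle=%.2f  (n=%d)\n",
             user/n, iowait/n, idle/n, n }' sar-cpu.txt
  # prints roughly %user=10.63  %iowait=3.03  %idle=84.35  (n=7),
  # against sar's weighted 10.61 / 3.01 / 84.39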