Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-9933 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-nkGmbJUOXrj8/agent.2077
SSH_AGENT_PID=2079
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_5468798684531083163.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_5468798684531083163.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 5582cd406c8414919c4d5d7f5b116f4f1e5a971d (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5582cd406c8414919c4d5d7f5b116f4f1e5a971d # timeout=30
Commit message: "Merge "Add ACM regression test suite""
 > git rev-list --no-walk 5582cd406c8414919c4d5d7f5b116f4f1e5a971d # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins990288347304115897.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-ohSB
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ohSB/bin to PATH
Generating Requirements File
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.9 requires openstacksdk>=2.1.0, but you have openstacksdk 0.62.0 which is incompatible.
Python 3.10.6
pip 24.0 from /tmp/venv-ohSB/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.2.2
aspy.yaml==1.3.0
attrs==23.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.34.53
botocore==1.34.53
bs4==0.0.2
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.6.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==3.0.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.6.1
docker==4.2.2
dogpile.cache==1.3.2
email_validator==2.1.1
filelock==3.13.1
future==1.0.0
gitdb==4.0.11
GitPython==3.1.42
google-auth==2.28.1
httplib2==0.22.0
identify==2.5.35
idna==3.6
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.3
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
keystoneauth1==5.6.0
kubernetes==29.0.0
lftools==0.37.9
lxml==5.1.0
MarkupSafe==2.1.5
msgpack==1.0.7
multi_key_dict==2.0.3
munch==4.0.0
netaddr==1.2.1
netifaces==0.11.0
niet==1.4.2
nodeenv==1.8.0
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==0.62.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.0.1
oslo.config==9.4.0
oslo.context==5.5.0
oslo.i18n==6.3.0
oslo.log==5.5.0
oslo.serialization==5.4.0
oslo.utils==7.1.0
packaging==23.2
pbr==6.0.0
platformdirs==4.2.0
prettytable==3.10.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycparser==2.21
pygerrit2==2.0.15
PyGithub==2.2.0
pyinotify==0.9.6
PyJWT==2.8.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.8.2
pyrsistent==0.20.0
python-cinderclient==9.4.0
python-dateutil==2.8.2
python-heatclient==3.4.0
python-jenkins==1.8.2
python-keystoneclient==5.3.0
python-magnumclient==4.3.0
python-novaclient==18.4.0
python-openstackclient==6.0.1
python-swiftclient==4.5.0
PyYAML==6.0.1
referencing==0.33.0
requests==2.31.0
requests-oauthlib==1.3.1
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.18.0
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.10.0
simplejson==3.19.2
six==1.16.0
smmap==5.0.1
soupsieve==2.5
stevedore==5.2.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.12.4
tqdm==4.66.2
typing_extensions==4.10.0
tzdata==2024.1
urllib3==1.26.18
virtualenv==20.25.1
wcwidth==0.2.13
websocket-client==1.7.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.2.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins6917225224165365458.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins6422848241316460569.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
+ set +u
+ save_set
+ RUN_CSIT_SAVE_SET=ehxB
+ RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
+ '[' 1 -eq 0 ']'
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ export ROBOT_VARIABLES=
+ ROBOT_VARIABLES=
+ export PROJECT=pap
+ PROJECT=pap
+ cd /w/workspace/policy-pap-master-project-csit-pap
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ mktemp -d
++ ROBOT_VENV=/tmp/tmp.W4wZtNCRb3
++ echo ROBOT_VENV=/tmp/tmp.W4wZtNCRb3
+++ python3 --version
++ echo 'Python version is: Python 3.6.9'
Python version is: Python 3.6.9
++ python3 -m venv --clear /tmp/tmp.W4wZtNCRb3
++ source /tmp/tmp.W4wZtNCRb3/bin/activate
+++ deactivate nondestructive
+++ '[' -n '' ']'
+++ '[' -n '' ']'
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
+++ '[' -n '' ']'
+++ unset VIRTUAL_ENV
+++ '[' '!' nondestructive = nondestructive ']'
+++ VIRTUAL_ENV=/tmp/tmp.W4wZtNCRb3
+++ export VIRTUAL_ENV
+++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ PATH=/tmp/tmp.W4wZtNCRb3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ export PATH
+++ '[' -n '' ']'
+++ '[' -z '' ']'
+++ _OLD_VIRTUAL_PS1=
+++ '[' 'x(tmp.W4wZtNCRb3) ' '!=' x ']'
+++ PS1='(tmp.W4wZtNCRb3) '
+++ export PS1
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
++ set -exu
++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
++ echo 'Installing Python Requirements'
Installing Python Requirements
++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2024.2.2
cffi==1.15.1
charset-normalizer==2.0.12
cryptography==40.0.2
decorator==5.1.1
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
idna==3.6
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
lxml==5.1.0
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
paramiko==3.4.0
pkg_resources==0.0.0
ply==3.11
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.1
python-dateutil==2.8.2
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
WebTest==3.0.0
zipp==3.6.0
++ mkdir -p /tmp/tmp.W4wZtNCRb3/src/onap
++ rm -rf /tmp/tmp.W4wZtNCRb3/src/onap/testsuite
++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
++ echo 'Installing python confluent-kafka library'
Installing python confluent-kafka library
++ python3 -m pip install -qq confluent-kafka
++ echo 'Uninstall docker-py and reinstall docker.'
Uninstall docker-py and reinstall docker.
++ python3 -m pip uninstall -y -qq docker
++ python3 -m pip install -U -qq docker
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2024.2.2
cffi==1.15.1
charset-normalizer==2.0.12
confluent-kafka==2.3.0
cryptography==40.0.2
decorator==5.1.1
deepdiff==5.7.0
dnspython==2.2.1
docker==5.0.3
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
future==1.0.0
idna==3.6
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
Jinja2==3.0.3
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
kafka-python==2.0.2
lxml==5.1.0
MarkupSafe==2.0.1
more-itertools==5.0.0
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
ordered-set==4.0.2
paramiko==3.4.0
pbr==6.0.0
pkg_resources==0.0.0
ply==3.11
protobuf==3.19.6
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.1
python-dateutil==2.8.2
PyYAML==6.0.1
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-onap==0.6.0.dev105
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
robotlibcore-temp==1.0.2
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
websocket-client==1.3.1
WebTest==3.0.0
zipp==3.6.0
++ uname
++ grep -q Linux
++ sudo apt-get -y -qq install libxml2-utils
+ load_set
+ _setopts=ehuxB
++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o nounset
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo ehuxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +e
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +u
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ source_safely /tmp/tmp.W4wZtNCRb3/bin/activate
+ '[' -z /tmp/tmp.W4wZtNCRb3/bin/activate ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /tmp/tmp.W4wZtNCRb3/bin/activate
++ deactivate nondestructive
++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ unset _OLD_VIRTUAL_PATH
++ '[' -n '' ']'
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
++ '[' -n '' ']'
++ unset VIRTUAL_ENV
++ '[' '!' nondestructive = nondestructive ']'
++ VIRTUAL_ENV=/tmp/tmp.W4wZtNCRb3
++ export VIRTUAL_ENV
++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ PATH=/tmp/tmp.W4wZtNCRb3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ '[' -n '' ']'
++ '[' -z '' ']'
++ _OLD_VIRTUAL_PS1='(tmp.W4wZtNCRb3) '
++ '[' 'x(tmp.W4wZtNCRb3) ' '!=' x ']'
++ PS1='(tmp.W4wZtNCRb3) (tmp.W4wZtNCRb3) '
++ export PS1
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ export TEST_OPTIONS=
+ TEST_OPTIONS=
++ mktemp -d
+ WORKDIR=/tmp/tmp.yQgLqrqYzc
+ cd /tmp/tmp.yQgLqrqYzc
+ docker login -u docker -p docker nexus3.onap.org:10001
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
+ SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
+++ GERRIT_BRANCH=master
+++ echo GERRIT_BRANCH=master
GERRIT_BRANCH=master
+++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
+++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
latest: Pulling from prom/prometheus
Digest: sha256:bc1794e85c9e00293351b967efa267ce6af1c824ac875a9d0c7ac84700a8b53e
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:8640e5038e83ca4554ed56b9d76375158bcd51580238c6f5d8adaf3f20dd5379
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:ed573692302e5a28aa3b51a60adbd7641290e273719edd44bc9ff784d1569efa
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-api
Digest: sha256:71cc3c3555fddbd324c5ddec27e24db340b82732d2f6ce50eddcfdf6715a7ab2
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-pap
Digest: sha256:448850bc9066413f6555e9c62d97da12eaa2c454a1304262987462aae46f4676
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:8670bcaff746ebc196cef9125561eb167e1e65c7e2f8d374c0d8834d57564da4
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
Creating compose_zookeeper_1 ...
Creating prometheus ...
Creating mariadb ...
Creating simulator ...
Creating prometheus ... done
Creating grafana ...
Creating grafana ... done
Creating mariadb ... done
Creating policy-db-migrator ...
Creating simulator ... done
Creating compose_zookeeper_1 ... done
Creating kafka ...
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating kafka ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
NAMES                 STATUS
policy-apex-pdp       Up 10 seconds
policy-pap            Up 11 seconds
policy-api            Up 13 seconds
kafka                 Up 12 seconds
grafana               Up 18 seconds
simulator             Up 16 seconds
mariadb               Up 17 seconds
compose_zookeeper_1   Up 15 seconds
prometheus            Up 19 seconds
NAMES                 STATUS
policy-apex-pdp       Up 15 seconds
policy-pap            Up 16 seconds
policy-api            Up 18 seconds
kafka                 Up 17 seconds
grafana               Up 23 seconds
simulator             Up 21 seconds
mariadb               Up 22 seconds
compose_zookeeper_1   Up 20 seconds
prometheus            Up 24 seconds
NAMES                 STATUS
policy-apex-pdp       Up 20 seconds
policy-pap            Up 21 seconds
policy-api            Up 23 seconds
kafka                 Up 22 seconds
grafana               Up 28 seconds
simulator             Up 26 seconds
mariadb               Up 27 seconds
compose_zookeeper_1   Up 25 seconds
prometheus            Up 29 seconds
NAMES                 STATUS
policy-apex-pdp       Up 25 seconds
policy-pap            Up 26 seconds
policy-api            Up 28 seconds
kafka                 Up 27 seconds
grafana               Up 33 seconds
simulator             Up 31 seconds
mariadb               Up 32 seconds
compose_zookeeper_1   Up 30 seconds
prometheus            Up 34 seconds
NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 31 seconds
policy-api            Up 33 seconds
kafka                 Up 32 seconds
grafana               Up 38 seconds
simulator             Up 36 seconds
mariadb               Up 37 seconds
compose_zookeeper_1   Up 35 seconds
prometheus            Up 39 seconds
NAMES                 STATUS
policy-apex-pdp       Up 35 seconds
policy-pap            Up 36 seconds
policy-api            Up 38 seconds
kafka                 Up 37 seconds
grafana               Up 43 seconds
simulator             Up 41 seconds
mariadb               Up 42 seconds
compose_zookeeper_1   Up 40 seconds
prometheus            Up 44 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:14:46 up 4 min,  0 users,  load average: 3.20, 1.44, 0.58
Tasks: 208 total,   1 running, 131 sleeping,   0 stopped,   0 zombie
%Cpu(s): 14.1 us,  3.0 sy,  0.0 ni, 78.6 id,  4.1 wa,  0.0 hi,  0.1 si,  0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         22G        1.3M        6.2G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 35 seconds
policy-pap            Up 36 seconds
policy-api            Up 38 seconds
kafka                 Up 37 seconds
grafana               Up 44 seconds
simulator             Up 41 seconds
mariadb               Up 42 seconds
compose_zookeeper_1   Up 40 seconds
prometheus            Up 44 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O       PIDS
e417f0e35287   policy-apex-pdp       2.10%     185.6MiB / 31.41GiB   0.58%     7.21kB / 6.97kB   0B / 0B         48
120b5bfa683b   policy-pap            6.74%     528.6MiB / 31.41GiB   1.64%     30.5kB / 32.8kB   0B / 153MB      61
d898c735c037   policy-api            0.11%     539.8MiB / 31.41GiB   1.68%     1MB / 737kB       0B / 0B         56
c7bee733818e   kafka                 8.50%     378.4MiB / 31.41GiB   1.18%     72.1kB / 74.7kB   0B / 508kB      84
861ec82cea04   grafana               0.03%     57.93MiB / 31.41GiB   0.18%     18.9kB / 3.44kB   0B / 24MB       21
1aa1a47f47a8   simulator             0.07%     125.3MiB / 31.41GiB   0.39%     1.31kB / 0B       0B / 0B         76
99656c7b467a   mariadb               0.02%     101.7MiB / 31.41GiB   0.32%     996kB / 1.19MB    11MB / 71.4MB   37
87f646d9b0d2   compose_zookeeper_1   0.18%     98.32MiB / 31.41GiB   0.31%     55.8kB / 49.4kB   0B / 332kB      60
7d01a6da3020   prometheus            0.00%     19.42MiB / 31.41GiB   0.06%     28.6kB / 1.09kB   131kB / 0B      12
+ echo
+ cd /tmp/tmp.yQgLqrqYzc
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | PASS |
22 tests, 22 passed, 0 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | PASS |
30 tests, 30 passed, 0 failed
==============================================================================
Output:  /tmp/tmp.yQgLqrqYzc/output.xml
Log:     /tmp/tmp.yQgLqrqYzc/log.html
Report:  /tmp/tmp.yQgLqrqYzc/report.html
+ RESULT=0
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 0'
RESULT: 0
+ exit 0
+ on_exit
+ rc=0
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
policy-api            Up 2 minutes
kafka                 Up 2 minutes
grafana               Up 2 minutes
simulator             Up 2 minutes
mariadb               Up 2 minutes
compose_zookeeper_1   Up 2 minutes
prometheus            Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:16:36 up 6 min,  0 users,  load average: 0.80, 1.16, 0.57
Tasks: 197 total,   1 running, 129 sleeping,   0 stopped,   0 zombie
%Cpu(s): 11.2 us,  2.3 sy,  0.0 ni, 83.2 id,  3.2 wa,  0.0 hi,  0.1 si,  0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.8G         22G        1.3M        6.2G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
policy-api            Up 2 minutes
kafka                 Up 2 minutes
grafana               Up 2 minutes
simulator             Up 2 minutes
mariadb               Up 2 minutes
compose_zookeeper_1   Up 2 minutes
prometheus            Up 2 minutes
+ echo

+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O       PIDS
e417f0e35287   policy-apex-pdp       1.87%     190.1MiB / 31.41GiB   0.59%     56.7kB / 91.4kB   0B / 0B         52
120b5bfa683b   policy-pap            0.73%     499.5MiB / 31.41GiB   1.55%     2.33MB / 774kB    0B / 153MB      65
d898c735c037   policy-api            0.11%     615.3MiB / 31.41GiB   1.91%     2.49MB / 1.26MB   0B / 0B         58
c7bee733818e   kafka                 7.73%     387.1MiB / 31.41GiB   1.20%     242kB / 217kB     0B / 606kB      85
861ec82cea04   grafana               0.08%     65.02MiB / 31.41GiB   0.20%     19.6kB / 4.39kB   0B / 24MB       21
1aa1a47f47a8   simulator             0.07%     125.4MiB / 31.41GiB   0.39%     1.58kB / 0B       0B / 0B         78
99656c7b467a   mariadb               0.01%     103.1MiB / 31.41GiB   0.32%     1.95MB / 4.77MB   11MB / 71.7MB   28
87f646d9b0d2   compose_zookeeper_1   0.10%     99.64MiB / 31.41GiB   0.31%     58.7kB / 51kB     0B / 332kB      60
7d01a6da3020   prometheus            0.00%     25.52MiB / 31.41GiB   0.08%     139kB / 10.2kB    131kB / 0B      13
+ echo

+ source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
++ echo 'Shut down started!'
Shut down started!
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
++ source export-ports.sh
++ source get-versions.sh
++ echo 'Collecting logs from docker compose containers...'
Collecting logs from docker compose containers...
++ docker-compose logs
++ cat docker_compose.log
Attaching to policy-apex-pdp, policy-pap, policy-api, kafka, policy-db-migrator, grafana, simulator, mariadb, compose_zookeeper_1, prometheus
zookeeper_1  | ===> User
zookeeper_1  | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper_1  | ===> Configuring ...
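[Editor's note] The `relax_set`/`load_set`/`source_safely` trio visible in the trace above is a common CI idiom: temporarily drop `errexit`/`pipefail` around a step that is allowed to fail, capture its exit status, then restore strict mode. A minimal self-contained sketch of that pattern follows; the real `stop-compose.sh` is replaced here by a temporary stub script, since the job's compose files are not available outside the workspace.

```shell
#!/bin/bash
set -eo pipefail

# relax_set drops errexit/pipefail so a failing step cannot abort the job;
# load_set restores the strict options afterwards.
relax_set() { set +e; set +o pipefail; }
load_set()  { set -e; set -o pipefail; }

# source_safely wraps sourcing a script in that relax/restore pair.
source_safely() {
  if [ -z "$1" ]; then return 1; fi
  relax_set
  . "$1"
  load_set
}

# Stub standing in for compose/stop-compose.sh (not available here).
stub=$(mktemp)
printf 'echo "Shut down started!"\n' > "$stub"
source_safely "$stub"
rc=$?
rm -f "$stub"
echo "rc=$rc"
```

Because the sourced script runs with `errexit` relaxed, a failing teardown command only changes the captured status instead of killing the whole Jenkins step.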
zookeeper_1 | ===> Running preflight checks ... zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper_1 | ===> Launching ... zookeeper_1 | ===> Launching zookeeper ... zookeeper_1 | [2024-02-29 23:14:09,361] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-29 23:14:09,368] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-29 23:14:09,368] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-29 23:14:09,368] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-29 23:14:09,368] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-29 23:14:09,369] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-02-29 23:14:09,370] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-02-29 23:14:09,370] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-02-29 23:14:09,370] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper_1 | [2024-02-29 23:14:09,371] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) zookeeper_1 | [2024-02-29 23:14:09,371] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-29 23:14:09,372] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-29 23:14:09,372] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-29 23:14:09,372] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-29 23:14:09,372] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-02-29 23:14:09,372] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper_1 | [2024-02-29 23:14:09,383] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) zookeeper_1 | [2024-02-29 23:14:09,386] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-02-29 23:14:09,386] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-02-29 23:14:09,388] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-02-29 23:14:09,398] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,398] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,398] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,398] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,398] INFO / / / _ 
\ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,398] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,398] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,398] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,398] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,399] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:host.name=87f646d9b0d2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/k
afka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/
usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-cor
e-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:user.name=appuser 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,400] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,401] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper_1 | [2024-02-29 23:14:09,402] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,402] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 
23:14:09,403] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-02-29 23:14:09,403] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-29 23:14:09,404] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-02-29 23:14:09,406] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,406] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,407] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) zookeeper_1 | [2024-02-29 23:14:09,407] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) zookeeper_1 | [2024-02-29 23:14:09,407] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 
23:14:09,427] INFO Logging initialized @563ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) zookeeper_1 | [2024-02-29 23:14:09,516] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-02-29 23:14:09,516] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-02-29 23:14:09,536] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-02-29 23:14:09,564] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) zookeeper_1 | [2024-02-29 23:14:09,564] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) zookeeper_1 | [2024-02-29 23:14:09,565] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) zookeeper_1 | [2024-02-29 23:14:09,568] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) zookeeper_1 | [2024-02-29 23:14:09,576] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) zookeeper_1 | [2024-02-29 23:14:09,594] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) zookeeper_1 | [2024-02-29 23:14:09,595] INFO Started @731ms (org.eclipse.jetty.server.Server) zookeeper_1 | [2024-02-29 23:14:09,595] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) zookeeper_1 | [2024-02-29 23:14:09,605] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-02-29 23:14:09,607] WARN maxCnxns is not configured, using 
default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) zookeeper_1 | [2024-02-29 23:14:09,610] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-02-29 23:14:09,613] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) zookeeper_1 | [2024-02-29 23:14:09,631] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-02-29 23:14:09,632] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) zookeeper_1 | [2024-02-29 23:14:09,634] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-02-29 23:14:09,634] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-02-29 23:14:09,642] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) zookeeper_1 | [2024-02-29 23:14:09,642] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-02-29 23:14:09,646] INFO Snapshot loaded in 11 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) zookeeper_1 | [2024-02-29 23:14:09,646] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-02-29 23:14:09,647] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-02-29 23:14:09,655] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) zookeeper_1 | [2024-02-29 23:14:09,655] INFO 
zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) zookeeper_1 | [2024-02-29 23:14:09,669] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) zookeeper_1 | [2024-02-29 23:14:09,670] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) zookeeper_1 | [2024-02-29 23:14:13,222] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) grafana | logger=settings t=2024-02-29T23:14:02.62379877Z level=info msg="Starting Grafana" version=10.3.3 commit=252761264e22ece57204b327f9130d3b44592c01 branch=HEAD compiled=2024-02-29T23:14:02Z grafana | logger=settings t=2024-02-29T23:14:02.624097953Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-02-29T23:14:02.624109003Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-02-29T23:14:02.624113633Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-02-29T23:14:02.624119603Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-02-29T23:14:02.624123233Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-02-29T23:14:02.624126383Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-02-29T23:14:02.624163653Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-02-29T23:14:02.624173113Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-02-29T23:14:02.624177893Z level=info msg="Config 
overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-02-29T23:14:02.624183073Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-02-29T23:14:02.624187093Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-02-29T23:14:02.624190763Z level=info msg=Target target=[all] grafana | logger=settings t=2024-02-29T23:14:02.624202783Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-02-29T23:14:02.624206193Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-02-29T23:14:02.624229734Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-02-29T23:14:02.624242364Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-02-29T23:14:02.624266264Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-02-29T23:14:02.624275244Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-02-29T23:14:02.624656767Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-02-29T23:14:02.624684917Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-02-29T23:14:02.625445434Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-02-29T23:14:02.626441102Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-02-29T23:14:02.627263689Z level=info msg="Migration successfully executed" id="create migration_log table" duration=821.867µs grafana | logger=migrator t=2024-02-29T23:14:02.633040457Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-02-29T23:14:02.633570041Z level=info 
msg="Migration successfully executed" id="create user table" duration=529.424µs grafana | logger=migrator t=2024-02-29T23:14:02.640449028Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-02-29T23:14:02.642132722Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.682874ms grafana | logger=migrator t=2024-02-29T23:14:02.647250514Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-02-29T23:14:02.648586275Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.334571ms grafana | logger=migrator t=2024-02-29T23:14:02.65279222Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-02-29T23:14:02.653536426Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=743.956µs grafana | logger=migrator t=2024-02-29T23:14:02.659910409Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-02-29T23:14:02.661076979Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.16644ms mariadb | 2024-02-29 23:14:03+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-02-29 23:14:03+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-02-29 23:14:03+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
mariadb | 2024-02-29 23:14:03+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-02-29 23:14:04 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-02-29 23:14:04 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-02-29 23:14:04 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-02-29 23:14:05+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-02-29 23:14:05+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-02-29 23:14:05+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-02-29 23:14:05 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... 
mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-02-29 23:14:05 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-02-29 23:14:05 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-02-29 23:14:05 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-02-29 23:14:05 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-02-29 23:14:06 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-02-29 23:14:06 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-02-29 23:14:06 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. mariadb | 2024-02-29 23:14:06 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-02-29 23:14:06 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-02-29 23:14:06 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-02-29 23:14:06 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-02-29 23:14:06 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-02-29 23:14:06 0 [Note] mariadbd: ready for connections. 
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-02-29 23:14:06+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-02-29 23:14:08+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-02-29 23:14:08+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-02-29 23:14:08+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-02-29 23:14:08+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh grafana | logger=migrator t=2024-02-29T23:14:02.664612728Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" policy-apex-pdp | Waiting for mariadb port 3306... mariadb | #!/bin/bash -xv grafana | logger=migrator t=2024-02-29T23:14:02.669536109Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.922341ms policy-apex-pdp | mariadb (172.17.0.3:3306) open kafka | ===> User kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) grafana | logger=migrator t=2024-02-29T23:14:02.673672613Z level=info msg="Executing migration" id="create user table v2" policy-apex-pdp | Waiting for kafka port 9092... policy-api | Waiting for mariadb port 3306... policy-db-migrator | Waiting for mariadb port 3306... mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved kafka | ===> Configuring ... 
grafana | logger=migrator t=2024-02-29T23:14:02.67452114Z level=info msg="Migration successfully executed" id="create user table v2" duration=847.747µs policy-apex-pdp | kafka (172.17.0.8:9092) open prometheus | ts=2024-02-29T23:14:01.584Z caller=main.go:564 level=info msg="No time or size retention was set so using the default time retention" duration=15d policy-api | mariadb (172.17.0.3:3306) open policy-pap | Waiting for mariadb port 3306... policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # grafana | logger=migrator t=2024-02-29T23:14:02.680347308Z level=info msg="Executing migration" id="create index UQE_user_login - v2" policy-apex-pdp | Waiting for pap port 6969... prometheus | ts=2024-02-29T23:14:01.585Z caller=main.go:608 level=info msg="Starting Prometheus Server" mode=server version="(version=2.50.1, branch=HEAD, revision=8c9b0285360a0b6288d76214a75ce3025bce4050)" policy-api | Waiting for policy-db-migrator port 6824... policy-pap | mariadb (172.17.0.3:3306) open policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json kafka | Running in Zookeeper mode... mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); grafana | logger=migrator t=2024-02-29T23:14:02.681136415Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=791.027µs policy-apex-pdp | pap (172.17.0.10:6969) open prometheus | ts=2024-02-29T23:14:01.585Z caller=main.go:613 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@6213bb3ee580, date=20240226-11:36:26, tags=netgo,builtinassets,stringlabels)" policy-api | policy-db-migrator (172.17.0.7:6824) open policy-pap | Waiting for kafka port 9092... 
policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused simulator | overriding logback.xml kafka | ===> Running preflight checks ... mariadb | # you may not use this file except in compliance with the License. grafana | logger=migrator t=2024-02-29T23:14:02.684655684Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-02-29T23:14:02.685444551Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=788.737µs prometheus | ts=2024-02-29T23:14:01.585Z caller=main.go:614 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-pap | kafka (172.17.0.8:9092) open policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused simulator | 2024-02-29 23:14:05,300 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json kafka | ===> Check if /var/lib/kafka/data is writable ... mariadb | # You may obtain a copy of the License at grafana | logger=migrator t=2024-02-29T23:14:02.688728308Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-02-29T23:14:02.689181731Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=453.173µs prometheus | ts=2024-02-29T23:14:01.585Z caller=main.go:615 level=info fd_limits="(soft=1048576, hard=1048576)" policy-api | policy-pap | Waiting for api port 6969... policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! simulator | 2024-02-29 23:14:05,364 INFO org.onap.policy.models.simulators starting kafka | ===> Check if Zookeeper is healthy ... 
mariadb | # grafana | logger=migrator t=2024-02-29T23:14:02.695540134Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-02-29T23:14:02.696543992Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.001828ms prometheus | ts=2024-02-29T23:14:01.585Z caller=main.go:616 level=info vm_limits="(soft=unlimited, hard=unlimited)" policy-api | . ____ _ __ _ _ policy-pap | api (172.17.0.9:6969) open policy-db-migrator | 321 blocks simulator | 2024-02-29 23:14:05,364 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties kafka | SLF4J: Class path contains multiple SLF4J bindings. mariadb | # http://www.apache.org/licenses/LICENSE-2.0 grafana | logger=migrator t=2024-02-29T23:14:02.700678657Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-02-29T23:14:02.702702484Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.987876ms prometheus | ts=2024-02-29T23:14:01.587Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml policy-db-migrator | Preparing upgrade release version: 0800 simulator | 2024-02-29 23:14:05,602 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] mariadb | # grafana | logger=migrator t=2024-02-29T23:14:02.706477155Z level=info msg="Executing migration" id="Update user table charset" policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp 
/opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' prometheus | ts=2024-02-29T23:14:01.588Z caller=main.go:1118 level=info msg="Starting TSDB ..." policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ simulator | 2024-02-29 23:14:05,603 INFO org.onap.policy.models.simulators starting A&AI simulator kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] mariadb | # Unless required by applicable law or agreed to in writing, software grafana | logger=migrator t=2024-02-29T23:14:02.706499095Z level=info msg="Migration successfully executed" id="Update user table charset" duration=22.42µs grafana | logger=migrator t=2024-02-29T23:14:02.709613601Z level=info msg="Executing migration" id="Add last_seen_at column to user" prometheus | ts=2024-02-29T23:14:01.594Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json policy-db-migrator | Preparing upgrade release version: 0900 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) simulator | 2024-02-29 23:14:05,765 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. mariadb | # distributed under the License is distributed on an "AS IS" BASIS, policy-apex-pdp | [2024-02-29T23:14:44.788+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] grafana | logger=migrator t=2024-02-29T23:14:02.710436458Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=822.887µs prometheus | ts=2024-02-29T23:14:01.594Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 policy-pap | policy-db-migrator | Preparing upgrade release version: 1000 simulator | 2024-02-29 23:14:05,776 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. policy-apex-pdp | [2024-02-29T23:14:44.999+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: grafana | logger=migrator t=2024-02-29T23:14:02.716299226Z level=info msg="Executing migration" id="Add missing user data" prometheus | ts=2024-02-29T23:14:01.596Z caller=head.go:610 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" policy-pap | . 
____ _ __ _ _ policy-db-migrator | Preparing upgrade release version: 1100 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / simulator | 2024-02-29 23:14:05,778 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING kafka | [2024-02-29 23:14:13,156] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) mariadb | # See the License for the specific language governing permissions and policy-apex-pdp | allow.auto.create.topics = true grafana | logger=migrator t=2024-02-29T23:14:02.71676875Z level=info msg="Migration successfully executed" id="Add missing user data" duration=468.944µs prometheus | ts=2024-02-29T23:14:01.596Z caller=head.go:692 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.03µs policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-db-migrator | Preparing upgrade release version: 1200 policy-api | =========|_|==============|___/=/_/_/_/ simulator | 2024-02-29 23:14:05,784 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 kafka | [2024-02-29 23:14:13,157] INFO Client 
environment:host.name=c7bee733818e (org.apache.zookeeper.ZooKeeper) mariadb | # limitations under the License. policy-apex-pdp | auto.commit.interval.ms = 5000 grafana | logger=migrator t=2024-02-29T23:14:02.720799543Z level=info msg="Executing migration" id="Add is_disabled column to user" prometheus | ts=2024-02-29T23:14:01.596Z caller=head.go:700 level=info component=tsdb msg="Replaying WAL, this may take a while" policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-db-migrator | Preparing upgrade release version: 1300 policy-api | :: Spring Boot :: (v3.1.8) simulator | 2024-02-29 23:14:05,848 INFO Session workerName=node0 kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) mariadb | policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) grafana | logger=migrator t=2024-02-29T23:14:02.723530196Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=2.730103ms policy-db-migrator | Done policy-api | simulator | 2024-02-29 23:14:06,410 INFO Using GSON for REST calls kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / grafana | logger=migrator t=2024-02-29T23:14:02.72768539Z level=info msg="Executing migration" id="Add index user.login/user.email" policy-db-migrator | name version policy-api | [2024-02-29T23:14:19.017+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 20 (/app/api.jar started by policy in /opt/app/policy/api/bin) simulator | 2024-02-29 23:14:06,497 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE} kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) mariadb | do policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-1 policy-pap | =========|_|==============|___/=/_/_/_/ grafana | logger=migrator t=2024-02-29T23:14:02.729009171Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.323851ms policy-db-migrator | policyadmin 0 policy-api | [2024-02-29T23:14:19.020+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" simulator | 2024-02-29 23:14:06,506 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" kafka | [2024-02-29 23:14:13,157] INFO Client 
environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/re
flections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/
kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java
/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-
new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-pap | :: Spring Boot :: (v3.1.8) grafana | logger=migrator t=2024-02-29T23:14:02.73246382Z level=info msg="Executing migration" id="Add is_service_account column to user" policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 policy-api | [2024-02-29T23:14:20.927+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 
simulator | 2024-02-29 23:14:06,513 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1785ms mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-pap | grafana | logger=migrator t=2024-02-29T23:14:02.73366845Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.20366ms policy-db-migrator | upgrade: 0 -> 1300 policy-api | [2024-02-29T23:14:21.029+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 91 ms. Found 6 JPA repository interfaces. simulator | 2024-02-29 23:14:06,514 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4264 ms. 
mariadb | done
kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-pap | [2024-02-29T23:14:32.932+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 33 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
grafana | logger=migrator t=2024-02-29T23:14:02.739812641Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
policy-db-migrator |
policy-api | [2024-02-29T23:14:21.493+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
simulator | 2024-02-29 23:14:06,523 INFO org.onap.policy.models.simulators starting SDNC simulator
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
kafka | [2024-02-29 23:14:13,157] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-pap | [2024-02-29T23:14:32.935+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
grafana | logger=migrator t=2024-02-29T23:14:02.752907499Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=13.095888ms
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-api | [2024-02-29T23:14:21.494+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
simulator | 2024-02-29 23:14:06,525 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | group.id = 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f
policy-apex-pdp | group.instance.id = null
policy-pap | [2024-02-29T23:14:35.018+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
grafana | logger=migrator t=2024-02-29T23:14:02.756724701Z level=info msg="Executing migration" id="create temp user table v1-7"
policy-db-migrator | --------------
policy-api | [2024-02-29T23:14:22.267+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
simulator | 2024-02-29 23:14:06,526 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-pap | [2024-02-29T23:14:35.155+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 125 ms. Found 7 JPA repository interfaces.
grafana | logger=migrator t=2024-02-29T23:14:02.757401536Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=676.415µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-api | [2024-02-29T23:14:22.279+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
simulator | 2024-02-29 23:14:06,526 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | [2024-02-29T23:14:35.611+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
grafana | logger=migrator t=2024-02-29T23:14:02.760799605Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
policy-db-migrator | --------------
policy-api | [2024-02-29T23:14:22.282+00:00|INFO|StandardService|main] Starting service [Tomcat]
simulator | 2024-02-29 23:14:06,528 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
kafka | [2024-02-29 23:14:13,157] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | [2024-02-29T23:14:35.612+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
grafana | logger=migrator t=2024-02-29T23:14:02.761629862Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=829.627µs
policy-db-migrator |
policy-api | [2024-02-29T23:14:22.282+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
simulator | 2024-02-29 23:14:06,541 INFO Session workerName=node0
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
kafka | [2024-02-29 23:14:13,157] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-pap | [2024-02-29T23:14:36.402+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
grafana | logger=migrator t=2024-02-29T23:14:02.767328879Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
policy-db-migrator |
policy-api | [2024-02-29T23:14:22.424+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
simulator | 2024-02-29 23:14:06,607 INFO Using GSON for REST calls
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
kafka | [2024-02-29 23:14:13,157] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-pap | [2024-02-29T23:14:36.412+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
grafana | logger=migrator t=2024-02-29T23:14:02.768116935Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=787.366µs
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-api | [2024-02-29T23:14:22.424+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3316 ms
simulator | 2024-02-29 23:14:06,620 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-pap | [2024-02-29T23:14:36.415+00:00|INFO|StandardService|main] Starting service [Tomcat]
grafana | logger=migrator t=2024-02-29T23:14:02.772427581Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
policy-db-migrator | --------------
policy-api | [2024-02-29T23:14:22.923+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
simulator | 2024-02-29 23:14:06,621 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
simulator | 2024-02-29 23:14:06,622 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1893ms
kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-pap | [2024-02-29T23:14:36.415+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
grafana | logger=migrator t=2024-02-29T23:14:02.773226717Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=798.596µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
policy-api | [2024-02-29T23:14:23.027+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
kafka | [2024-02-29 23:14:13,157] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
simulator | 2024-02-29 23:14:06,622 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4904 ms.
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
prometheus | ts=2024-02-29T23:14:01.596Z caller=head.go:771 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
policy-pap | [2024-02-29T23:14:36.515+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
grafana | logger=migrator t=2024-02-29T23:14:02.776930348Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
policy-db-migrator | --------------
policy-api | [2024-02-29T23:14:23.031+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
kafka | [2024-02-29 23:14:13,160] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper)
simulator | 2024-02-29 23:14:06,623 INFO org.onap.policy.models.simulators starting SO simulator
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-pap | [2024-02-29T23:14:36.516+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3490 ms
grafana | logger=migrator t=2024-02-29T23:14:02.777755325Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=824.097µs
policy-db-migrator |
policy-api | [2024-02-29T23:14:23.085+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
kafka | [2024-02-29 23:14:13,164] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
simulator | 2024-02-29 23:14:06,627 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-pap | [2024-02-29T23:14:37.012+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
grafana | logger=migrator t=2024-02-29T23:14:02.783336571Z level=info msg="Executing migration" id="Update temp_user table charset"
policy-db-migrator |
policy-api | [2024-02-29T23:14:23.466+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
kafka | [2024-02-29 23:14:13,169] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
simulator | 2024-02-29 23:14:06,627 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | retry.backoff.ms = 100
prometheus | ts=2024-02-29T23:14:01.596Z caller=head.go:808 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=81.521µs wal_replay_duration=518.107µs wbl_replay_duration=320ns total_replay_duration=633.119µs
policy-pap | [2024-02-29T23:14:37.106+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
grafana | logger=migrator t=2024-02-29T23:14:02.783385362Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=49.161µs
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-api | [2024-02-29T23:14:23.488+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
kafka | [2024-02-29 23:14:13,176] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
simulator | 2024-02-29 23:14:06,632 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | sasl.client.callback.handler.class = null
prometheus | ts=2024-02-29T23:14:01.599Z caller=main.go:1139 level=info fs_type=EXT4_SUPER_MAGIC
policy-pap | [2024-02-29T23:14:37.110+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
grafana | logger=migrator t=2024-02-29T23:14:02.786109844Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
policy-db-migrator | --------------
policy-api | [2024-02-29T23:14:23.598+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
kafka | [2024-02-29 23:14:13,191] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
simulator | 2024-02-29 23:14:06,633 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
policy-apex-pdp | sasl.jaas.config = null
prometheus | ts=2024-02-29T23:14:01.599Z caller=main.go:1142 level=info msg="TSDB started"
policy-pap | [2024-02-29T23:14:37.172+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
grafana | logger=migrator t=2024-02-29T23:14:02.787713338Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.609424ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-api | [2024-02-29T23:14:23.601+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
kafka | [2024-02-29 23:14:13,192] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
simulator | 2024-02-29 23:14:06,635 INFO Session workerName=node0
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
prometheus | ts=2024-02-29T23:14:01.599Z caller=main.go:1324 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
policy-pap | [2024-02-29T23:14:37.606+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
grafana | logger=migrator t=2024-02-29T23:14:02.79156876Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
policy-db-migrator | --------------
policy-api | [2024-02-29T23:14:25.687+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
kafka | [2024-02-29 23:14:13,204] INFO Socket connection established, initiating session, client: /172.17.0.8:33806, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
simulator | 2024-02-29 23:14:06,718 INFO Using GSON for REST calls
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
prometheus | ts=2024-02-29T23:14:01.600Z caller=main.go:1361 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.116515ms db_storage=1.79µs remote_storage=2.53µs web_handler=720ns query_engine=1.83µs scrape=284.334µs scrape_sd=148.342µs notify=33.271µs notify_sd=15.79µs rules=2.52µs tracing=7.05µs
policy-pap | [2024-02-29T23:14:37.632+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
grafana | logger=migrator t=2024-02-29T23:14:02.792740449Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.18067ms
policy-db-migrator |
policy-api | [2024-02-29T23:14:25.692+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
kafka | [2024-02-29 23:14:13,239] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000396720000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
simulator | 2024-02-29 23:14:06,731 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}
policy-apex-pdp | sasl.kerberos.service.name = null
prometheus | ts=2024-02-29T23:14:01.601Z caller=main.go:1103 level=info msg="Server is ready to receive web requests."
policy-pap | [2024-02-29T23:14:37.765+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7b6e5c12
grafana | logger=migrator t=2024-02-29T23:14:02.798578128Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
policy-db-migrator |
policy-api | [2024-02-29T23:14:26.867+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
kafka | [2024-02-29 23:14:13,362] INFO Session: 0x100000396720000 closed (org.apache.zookeeper.ZooKeeper)
simulator | 2024-02-29 23:14:06,733 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
prometheus | ts=2024-02-29T23:14:01.601Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
policy-pap | [2024-02-29T23:14:37.767+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
grafana | logger=migrator t=2024-02-29T23:14:02.799294594Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=716.276µs
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-api | [2024-02-29T23:14:27.720+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
mariadb |
kafka | [2024-02-29 23:14:13,362] INFO EventThread shut down for session: 0x100000396720000 (org.apache.zookeeper.ClientCnxn)
simulator | 2024-02-29 23:14:06,733 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @2004ms
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | [2024-02-29T23:14:40.002+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
grafana | logger=migrator t=2024-02-29T23:14:02.804061423Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
policy-db-migrator | --------------
policy-api | [2024-02-29T23:14:28.941+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
kafka | Using log4j config /etc/kafka/log4j.properties
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
simulator | 2024-02-29 23:14:06,734 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4898 ms.
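The mariadb entrypoint lines scattered through the log above (the `+ mysql ...` traces) amount to a per-database provisioning loop. A minimal dry-run sketch of that sequence, reconstructed only from the commands and database names visible in the log (here `echo` stands in for the actual `mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute` invocation, so nothing is executed against a server):

```shell
# Dry-run reconstruction of the mariadb provisioning loop seen in the log:
# one CREATE DATABASE and one GRANT per database, then a final FLUSH PRIVILEGES.
MYSQL_USER=policy_user   # user name shown in the GRANT statements of the log
for db in migration pooling policyadmin operationshistory clampacm policyclamp; do
  echo "CREATE DATABASE IF NOT EXISTS ${db};"
  echo "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
done
echo "FLUSH PRIVILEGES;"
```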
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-pap | [2024-02-29T23:14:40.006+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
grafana | logger=migrator t=2024-02-29T23:14:02.805218303Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.167599ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-api | [2024-02-29T23:14:29.150+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@607c7f58, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4bbb00a4, org.springframework.security.web.context.SecurityContextHolderFilter@6e11d059, org.springframework.security.web.header.HeaderWriterFilter@1d123972, org.springframework.security.web.authentication.logout.LogoutFilter@54e1e8a7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@206d4413, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@19bd1f98, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@69cf9acb, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@543d242e, org.springframework.security.web.access.ExceptionTranslationFilter@5b3063b7, org.springframework.security.web.access.intercept.AuthorizationFilter@407bfc49]
kafka | ===> Launching ...
simulator | 2024-02-29 23:14:06,736 INFO org.onap.policy.models.simulators starting VFC simulator
policy-apex-pdp | sasl.login.class = null
policy-pap | [2024-02-29T23:14:40.654+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
grafana | logger=migrator t=2024-02-29T23:14:02.809494438Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
policy-db-migrator | --------------
policy-api | [2024-02-29T23:14:30.083+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
kafka | ===> Launching kafka ...
mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
simulator | 2024-02-29 23:14:06,742 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-pap | [2024-02-29T23:14:41.125+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
grafana | logger=migrator t=2024-02-29T23:14:02.81335333Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.859702ms
policy-db-migrator |
policy-api | [2024-02-29T23:14:30.223+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
kafka | [2024-02-29 23:14:14,130] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
simulator | 2024-02-29 23:14:06,742 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-pap | [2024-02-29T23:14:41.258+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
grafana | logger=migrator t=2024-02-29T23:14:02.819015917Z level=info msg="Executing migration" id="create temp_user v2"
policy-db-migrator |
policy-api | [2024-02-29T23:14:30.249+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
kafka | [2024-02-29 23:14:14,523] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
simulator | 2024-02-29 23:14:06,743 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-pap | [2024-02-29T23:14:41.569+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
grafana | logger=migrator t=2024-02-29T23:14:02.819899884Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=883.217µs
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-api | [2024-02-29T23:14:30.268+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.073 seconds (process running for 12.692)
kafka | [2024-02-29 23:14:14,600] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
mariadb |
simulator | 2024-02-29 23:14:06,744 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-pap | allow.auto.create.topics = true
grafana | logger=migrator t=2024-02-29T23:14:02.823868947Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
policy-db-migrator | --------------
policy-api | [2024-02-29T23:14:39.928+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
kafka | [2024-02-29 23:14:14,601] INFO starting (kafka.server.KafkaServer)
mariadb | 2024-02-29 23:14:09+00:00 [Note] [Entrypoint]: Stopping temporary server
simulator | 2024-02-29 23:14:06,789 INFO Session workerName=node0
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-pap | auto.commit.interval.ms = 5000
grafana | logger=migrator t=2024-02-29T23:14:02.824714224Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=844.857µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-api | [2024-02-29T23:14:39.928+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
kafka | [2024-02-29 23:14:14,601] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
mariadb | 2024-02-29 23:14:09 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
simulator | 2024-02-29 23:14:06,847 INFO Using GSON for REST calls
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-pap | auto.include.jmx.reporter = true
grafana | logger=migrator t=2024-02-29T23:14:02.828769178Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
policy-db-migrator | --------------
policy-api | [2024-02-29T23:14:39.930+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
kafka | [2024-02-29 23:14:14,615] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: FTS optimize thread exiting.
simulator | 2024-02-29 23:14:06,863 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-pap | auto.offset.reset = latest
grafana | logger=migrator t=2024-02-29T23:14:02.829655005Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=885.007µs
policy-db-migrator |
policy-api | [2024-02-29T23:14:49.615+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers:
kafka | [2024-02-29 23:14:14,619] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: Starting shutdown...
simulator | 2024-02-29 23:14:06,865 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-pap | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-02-29T23:14:02.833659668Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" policy-db-migrator | policy-api | [] kafka | [2024-02-29 23:14:14,620] INFO Client environment:host.name=c7bee733818e (org.apache.zookeeper.ZooKeeper) mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool simulator | 2024-02-29 23:14:06,865 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @2137ms policy-apex-pdp | sasl.mechanism = GSSAPI policy-pap | check.crcs = true grafana | logger=migrator t=2024-02-29T23:14:02.834515365Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=855.397µs policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: Buffer pool(s) dump completed at 240229 23:14:09 simulator | 2024-02-29 23:14:06,866 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4877 ms. policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | client.dns.lookup = use_all_dns_ips grafana | logger=migrator t=2024-02-29T23:14:02.83992006Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" policy-db-migrator | -------------- kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" simulator | 2024-02-29 23:14:06,870 INFO org.onap.policy.models.simulators started policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-pap | client.id = consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-1 grafana | logger=migrator t=2024-02-29T23:14:02.840778157Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=857.767µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) mariadb | 2024-02-29 23:14:09 0 [Note] InnoDB: Shutdown completed; log sequence number 347334; transaction id 298 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-pap | client.rack = grafana | logger=migrator t=2024-02-29T23:14:02.844524848Z level=info msg="Executing migration" id="copy temp_user v1 to v2" policy-db-migrator | -------------- mariadb | 2024-02-29 23:14:09 0 [Note] mariadbd: Shutdown complete kafka | [2024-02-29 23:14:14,620] INFO Client 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/us
r/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/u
sr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../
share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-02-29T23:14:02.845023132Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=497.834µs policy-db-migrator | mariadb | kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | default.api.timeout.ms = 60000 grafana | logger=migrator t=2024-02-29T23:14:02.849092286Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" policy-db-migrator | mariadb | 2024-02-29 23:14:09+00:00 [Note] [Entrypoint]: Temporary server stopped kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | enable.auto.commit = true grafana | logger=migrator t=2024-02-29T23:14:02.850193145Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" 
duration=1.100379ms policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql mariadb | kafka | [2024-02-29 23:14:14,620] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | exclude.internal.topics = true grafana | logger=migrator t=2024-02-29T23:14:02.85808985Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" policy-db-migrator | -------------- mariadb | 2024-02-29 23:14:09+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. kafka | [2024-02-29 23:14:14,620] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-pap | fetch.max.bytes = 52428800 grafana | logger=migrator t=2024-02-29T23:14:02.858579824Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=489.334µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) mariadb | kafka | [2024-02-29 23:14:14,620] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-pap | fetch.max.wait.ms = 500 grafana | logger=migrator t=2024-02-29T23:14:02.861845402Z level=info msg="Executing migration" id="create star table" policy-db-migrator | -------------- mariadb | 2024-02-29 23:14:10 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
kafka | [2024-02-29 23:14:14,620] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-pap | fetch.min.bytes = 1 grafana | logger=migrator t=2024-02-29T23:14:02.86288132Z level=info msg="Migration successfully executed" id="create star table" duration=1.033798ms policy-db-migrator | mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 kafka | [2024-02-29 23:14:14,620] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | security.protocol = PLAINTEXT policy-pap | group.id = ee5900cb-eee5-431a-a953-12f2e7174bf4 grafana | logger=migrator t=2024-02-29T23:14:02.867204206Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" policy-db-migrator | mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Number of transaction pools: 1 kafka | [2024-02-29 23:14:14,620] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | security.providers = null policy-pap | group.instance.id = null grafana | logger=migrator t=2024-02-29T23:14:02.868629538Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.425132ms policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions kafka | [2024-02-29 23:14:14,620] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | send.buffer.bytes = 131072 policy-pap | heartbeat.interval.ms = 3000 grafana | logger=migrator t=2024-02-29T23:14:02.875803227Z level=info msg="Executing migration" id="create org table v1" policy-db-migrator | -------------- mariadb | 2024-02-29 23:14:10 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) kafka | [2024-02-29 23:14:14,620] INFO Client 
environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | session.timeout.ms = 45000 policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-02-29T23:14:02.876623184Z level=info msg="Migration successfully executed" id="create org table v1" duration=819.127µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) mariadb | 2024-02-29 23:14:10 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) kafka | [2024-02-29 23:14:14,620] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-pap | internal.leave.group.on.close = true grafana | logger=migrator t=2024-02-29T23:14:02.881714406Z level=info msg="Executing migration" id="create index UQE_org_name - v1" policy-db-migrator | -------------- mariadb | 2024-02-29 23:14:10 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF kafka | [2024-02-29 23:14:14,620] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false grafana | logger=migrator t=2024-02-29T23:14:02.883133898Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.426822ms policy-db-migrator | mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB kafka | [2024-02-29 23:14:14,622] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | ssl.cipher.suites = null policy-pap | isolation.level = 
read_uncommitted grafana | logger=migrator t=2024-02-29T23:14:02.88695114Z level=info msg="Executing migration" id="create org_user table v1" policy-db-migrator | mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Completed initialization of buffer pool kafka | [2024-02-29 23:14:14,626] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-02-29T23:14:02.88818012Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.22441ms policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) kafka | [2024-02-29 23:14:14,633] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-pap | max.partition.fetch.bytes = 1048576 grafana | logger=migrator t=2024-02-29T23:14:02.892248443Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" policy-db-migrator | -------------- mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: 128 rollback segments are active. kafka | [2024-02-29 23:14:14,635] INFO [ZooKeeperClient Kafka server] Waiting until connected. 
(kafka.zookeeper.ZooKeeperClient) policy-apex-pdp | ssl.engine.factory.class = null policy-pap | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-02-29T23:14:02.893639705Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.390862ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... policy-apex-pdp | ssl.key.password = null policy-pap | max.poll.records = 500 kafka | [2024-02-29 23:14:14,643] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-02-29T23:14:02.899393002Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" policy-db-migrator | -------------- mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-pap | metadata.max.age.ms = 300000 kafka | [2024-02-29 23:14:14,651] INFO Socket connection established, initiating session, client: /172.17.0.8:33808, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-02-29T23:14:02.90034582Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=952.278µs policy-db-migrator | mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: log sequence number 347334; transaction id 299 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-pap | metric.reporters = [] kafka | [2024-02-29 23:14:14,661] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000396720001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-02-29T23:14:02.903641658Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" policy-db-migrator | mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool policy-apex-pdp | ssl.keystore.key = null policy-pap | metrics.num.samples = 2 kafka | [2024-02-29 23:14:14,666] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) grafana | logger=migrator t=2024-02-29T23:14:02.904877688Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.23597ms policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql mariadb | 2024-02-29 23:14:10 0 [Note] Plugin 'FEEDBACK' is disabled. 
policy-apex-pdp | ssl.keystore.location = null policy-pap | metrics.recording.level = INFO kafka | [2024-02-29 23:14:14,986] INFO Cluster ID = FqFLOU6jRgiQltXq-uD-BA (kafka.server.KafkaServer) grafana | logger=migrator t=2024-02-29T23:14:02.908517578Z level=info msg="Executing migration" id="Update org table charset" policy-db-migrator | -------------- mariadb | 2024-02-29 23:14:10 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. policy-apex-pdp | ssl.keystore.password = null policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-02-29 23:14:14,991] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) grafana | logger=migrator t=2024-02-29T23:14:02.908582628Z level=info msg="Migration successfully executed" id="Update org table charset" duration=52.64µs policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) mariadb | 2024-02-29 23:14:10 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. policy-apex-pdp | ssl.keystore.type = JKS policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | [2024-02-29 23:14:15,046] INFO KafkaConfig values: grafana | logger=migrator t=2024-02-29T23:14:02.912219219Z level=info msg="Executing migration" id="Update org_user table charset" policy-db-migrator | -------------- mariadb | 2024-02-29 23:14:10 0 [Note] Server socket created on IP: '0.0.0.0'. 
policy-apex-pdp | ssl.protocol = TLSv1.3 policy-pap | receive.buffer.bytes = 65536 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 grafana | logger=migrator t=2024-02-29T23:14:02.912261499Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=43.38µs policy-db-migrator | mariadb | 2024-02-29 23:14:10 0 [Note] Server socket created on IP: '::'. policy-apex-pdp | ssl.provider = null policy-pap | reconnect.backoff.max.ms = 1000 kafka | alter.config.policy.class.name = null grafana | logger=migrator t=2024-02-29T23:14:02.918905954Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" policy-db-migrator | mariadb | 2024-02-29 23:14:10 0 [Note] mariadbd: ready for connections. policy-apex-pdp | ssl.secure.random.implementation = null policy-pap | reconnect.backoff.ms = 50 kafka | alter.log.dirs.replication.quota.window.num = 11 grafana | logger=migrator t=2024-02-29T23:14:02.919182556Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=275.802µs policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-pap | request.timeout.ms = 30000 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 grafana | logger=migrator t=2024-02-29T23:14:02.922780166Z level=info msg="Executing migration" id="create dashboard table" policy-db-migrator | -------------- mariadb | 2024-02-29 23:14:10 0 [Note] InnoDB: Buffer pool(s) load completed at 240229 23:14:10 policy-apex-pdp | ssl.truststore.certificates = null policy-pap | retry.backoff.ms = 100 kafka | authorizer.class.name = grafana | logger=migrator t=2024-02-29T23:14:02.924143047Z level=info msg="Migration successfully executed" id="create dashboard table" 
duration=1.341571ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) mariadb | 2024-02-29 23:14:10 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) policy-apex-pdp | ssl.truststore.location = null policy-pap | sasl.client.callback.handler.class = null kafka | auto.create.topics.enable = true grafana | logger=migrator t=2024-02-29T23:14:02.928488443Z level=info msg="Executing migration" id="add index dashboard.account_id" policy-db-migrator | -------------- mariadb | 2024-02-29 23:14:10 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) policy-apex-pdp | ssl.truststore.password = null policy-pap | sasl.jaas.config = null kafka | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-02-29T23:14:02.929926225Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.445202ms policy-db-migrator | mariadb | 2024-02-29 23:14:10 8 [Warning] Aborted connection 8 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) policy-apex-pdp | ssl.truststore.type = JKS policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | auto.leader.rebalance.enable = true grafana | logger=migrator t=2024-02-29T23:14:02.933526985Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" policy-db-migrator | mariadb | 2024-02-29 23:14:10 10 [Warning] Aborted connection 10 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | background.threads = 10
grafana | logger=migrator t=2024-02-29T23:14:02.934526143Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=998.618µs
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-apex-pdp |
policy-pap | sasl.kerberos.service.name = null
kafka | broker.heartbeat.interval.ms = 2000
grafana | logger=migrator t=2024-02-29T23:14:02.940834486Z level=info msg="Executing migration" id="create dashboard_tag table"
policy-db-migrator | --------------
policy-apex-pdp | [2024-02-29T23:14:45.173+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | broker.id = 1
grafana | logger=migrator t=2024-02-29T23:14:02.942030466Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.1949ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-apex-pdp | [2024-02-29T23:14:45.173+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | broker.id.generation.enable = true
grafana | logger=migrator t=2024-02-29T23:14:02.946264421Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
policy-db-migrator | --------------
policy-apex-pdp | [2024-02-29T23:14:45.173+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248485171
policy-pap | sasl.login.callback.handler.class = null
kafka | broker.rack = null
grafana | logger=migrator t=2024-02-29T23:14:02.947680032Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.414801ms
policy-db-migrator |
policy-apex-pdp | [2024-02-29T23:14:45.176+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-1, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Subscribed to topic(s): policy-pdp-pap
policy-pap | sasl.login.class = null
kafka | broker.session.timeout.ms = 9000
grafana | logger=migrator t=2024-02-29T23:14:02.951582765Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
policy-db-migrator |
policy-apex-pdp | [2024-02-29T23:14:45.194+00:00|INFO|ServiceManager|main] service manager starting
policy-pap | sasl.login.connect.timeout.ms = null
kafka | client.quota.callback.class = null
grafana | logger=migrator t=2024-02-29T23:14:02.952451042Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=862.517µs
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-apex-pdp | [2024-02-29T23:14:45.194+00:00|INFO|ServiceManager|main] service manager starting topics
policy-pap | sasl.login.read.timeout.ms = null
kafka | compression.type = producer
grafana | logger=migrator t=2024-02-29T23:14:02.958823415Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
policy-db-migrator | --------------
policy-apex-pdp | [2024-02-29T23:14:45.204+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | connection.failed.authentication.delay.ms = 100
grafana | logger=migrator t=2024-02-29T23:14:02.968004221Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=9.181536ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | [2024-02-29T23:14:45.232+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | connections.max.idle.ms = 600000
grafana | logger=migrator t=2024-02-29T23:14:02.971912883Z level=info msg="Executing migration" id="create dashboard v2"
policy-db-migrator | --------------
policy-apex-pdp | allow.auto.create.topics = true
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | connections.max.reauth.ms = 0
grafana | logger=migrator t=2024-02-29T23:14:02.97271012Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=796.547µs
policy-db-migrator |
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | control.plane.listener.name = null
grafana | logger=migrator t=2024-02-29T23:14:02.976089238Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
policy-db-migrator |
policy-apex-pdp | auto.include.jmx.reporter = true
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | controlled.shutdown.enable = true
grafana | logger=migrator t=2024-02-29T23:14:02.976906654Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=816.996µs
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-apex-pdp | auto.offset.reset = latest
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | controlled.shutdown.max.retries = 3
policy-db-migrator | --------------
policy-apex-pdp | bootstrap.servers = [kafka:9092]
kafka | controlled.shutdown.retry.backoff.ms = 5000
policy-pap | sasl.mechanism = GSSAPI
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | check.crcs = true
grafana | logger=migrator t=2024-02-29T23:14:02.983733031Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
kafka | controller.listener.names = null
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator | --------------
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-02-29T23:14:02.98485871Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.123589ms
kafka | controller.quorum.append.linger.ms = 25
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator |
policy-apex-pdp | client.id = consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2
grafana | logger=migrator t=2024-02-29T23:14:02.988640562Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
kafka | controller.quorum.election.backoff.max.ms = 1000
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator |
policy-apex-pdp | client.rack =
grafana | logger=migrator t=2024-02-29T23:14:02.989288907Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=647.155µs
kafka | controller.quorum.election.timeout.ms = 1000
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
grafana | logger=migrator t=2024-02-29T23:14:02.992927647Z level=info msg="Executing migration" id="drop table dashboard_v1"
kafka | controller.quorum.fetch.timeout.ms = 2000
policy-apex-pdp | connections.max.idle.ms = 540000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:02.993999156Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.072179ms
kafka | controller.quorum.request.timeout.ms = 2000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-02-29T23:14:02.999991916Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
kafka | controller.quorum.retry.backoff.ms = 20
policy-apex-pdp | enable.auto.commit = true
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.000056836Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=68.86µs
kafka | controller.quorum.voters = []
policy-apex-pdp | exclude.internal.topics = true
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.003708756Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
policy-apex-pdp | fetch.max.bytes = 52428800
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator |
kafka | controller.quota.window.num = 11
grafana | logger=migrator t=2024-02-29T23:14:03.006676623Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.966827ms
policy-apex-pdp | fetch.max.wait.ms = 500
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | controller.quota.window.size.seconds = 1
grafana | logger=migrator t=2024-02-29T23:14:03.05784905Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-apex-pdp | fetch.min.bytes = 1
policy-pap | security.protocol = PLAINTEXT
kafka | controller.socket.timeout.ms = 30000
grafana | logger=migrator t=2024-02-29T23:14:03.060713429Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.859229ms
policy-db-migrator | --------------
policy-apex-pdp | group.id = 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f
policy-pap | security.providers = null
kafka | create.topic.policy.class.name = null
grafana | logger=migrator t=2024-02-29T23:14:03.064211294Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | group.instance.id = null
policy-pap | send.buffer.bytes = 131072
kafka | default.replication.factor = 1
grafana | logger=migrator t=2024-02-29T23:14:03.066037652Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.825798ms
policy-db-migrator | --------------
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-pap | session.timeout.ms = 45000
kafka | delegation.token.expiry.check.interval.ms = 3600000
grafana | logger=migrator t=2024-02-29T23:14:03.072424555Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
policy-db-migrator |
policy-apex-pdp | interceptor.classes = []
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | delegation.token.expiry.time.ms = 86400000
grafana | logger=migrator t=2024-02-29T23:14:03.073787729Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.362474ms
policy-db-migrator |
policy-apex-pdp | internal.leave.group.on.close = true
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | delegation.token.master.key = null
grafana | logger=migrator t=2024-02-29T23:14:03.077310554Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | ssl.cipher.suites = null
kafka | delegation.token.max.lifetime.ms = 604800000
grafana | logger=migrator t=2024-02-29T23:14:03.079438755Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.137141ms
policy-db-migrator | --------------
policy-apex-pdp | isolation.level = read_uncommitted
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | delegation.token.secret.key = null
grafana | logger=migrator t=2024-02-29T23:14:03.083197503Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | delete.records.purgatory.purge.interval.requests = 1
policy-db-migrator | --------------
policy-apex-pdp | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-02-29T23:14:03.084063601Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=860.228µs
policy-pap | ssl.engine.factory.class = null
kafka | delete.topic.enable = true
policy-db-migrator |
policy-apex-pdp | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-02-29T23:14:03.090256223Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
policy-pap | ssl.key.password = null
kafka | early.start.listeners = null
policy-db-migrator |
policy-apex-pdp | max.poll.records = 500
grafana | logger=migrator t=2024-02-29T23:14:03.091373094Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.116511ms
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | fetch.max.bytes = 57671680
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-apex-pdp | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-02-29T23:14:03.094784128Z level=info msg="Executing migration" id="Update dashboard table charset"
policy-pap | ssl.keystore.certificate.chain = null
kafka | fetch.purgatory.purge.interval.requests = 1000
policy-db-migrator | --------------
policy-apex-pdp | metric.reporters = []
grafana | logger=migrator t=2024-02-29T23:14:03.094815468Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=32.02µs
policy-pap | ssl.keystore.key = null
kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | metrics.num.samples = 2
grafana | logger=migrator t=2024-02-29T23:14:03.098414334Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
policy-pap | ssl.keystore.location = null
kafka | group.consumer.heartbeat.interval.ms = 5000
policy-db-migrator | --------------
policy-apex-pdp | metrics.recording.level = INFO
grafana | logger=migrator t=2024-02-29T23:14:03.098446885Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=33.461µs
policy-pap | ssl.keystore.password = null
kafka | group.consumer.max.heartbeat.interval.ms = 15000
policy-db-migrator |
policy-apex-pdp | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-02-29T23:14:03.105245552Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
policy-pap | ssl.keystore.type = JKS
kafka | group.consumer.max.session.timeout.ms = 60000
policy-db-migrator |
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-02-29T23:14:03.107829528Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.581416ms
policy-pap | ssl.protocol = TLSv1.3
kafka | group.consumer.max.size = 2147483647
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-apex-pdp | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-02-29T23:14:03.115091531Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
policy-pap | ssl.provider = null
kafka | group.consumer.min.heartbeat.interval.ms = 5000
policy-db-migrator | --------------
policy-apex-pdp | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-02-29T23:14:03.11701003Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.92101ms
policy-pap | ssl.secure.random.implementation = null
kafka | group.consumer.min.session.timeout.ms = 45000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-02-29T23:14:03.120717677Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | group.consumer.session.timeout.ms = 45000
policy-db-migrator | --------------
policy-apex-pdp | request.timeout.ms = 30000
grafana | logger=migrator t=2024-02-29T23:14:03.122681306Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.963139ms
policy-pap | ssl.truststore.certificates = null
kafka | group.coordinator.new.enable = false
policy-db-migrator |
policy-apex-pdp | retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-29T23:14:03.128945419Z level=info msg="Executing migration" id="Add column uid in dashboard"
policy-pap | ssl.truststore.location = null
kafka | group.coordinator.threads = 1
policy-db-migrator |
policy-apex-pdp | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-02-29T23:14:03.131445464Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.497935ms
policy-pap | ssl.truststore.password = null
kafka | group.initial.rebalance.delay.ms = 3000
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-apex-pdp | sasl.jaas.config = null
policy-pap | ssl.truststore.type = JKS
kafka | group.max.session.timeout.ms = 1800000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.135525884Z level=info msg="Executing migration" id="Update uid column values in dashboard"
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | group.max.size = 2147483647
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-02-29T23:14:03.136003099Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=476.845µs
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-pap |
kafka | group.min.session.timeout.ms = 6000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.139922838Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
policy-apex-pdp | sasl.kerberos.service.name = null
policy-pap | [2024-02-29T23:14:41.774+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | initial.broker.registration.timeout.ms = 60000
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.141330032Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.406764ms
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | [2024-02-29T23:14:41.774+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | inter.broker.listener.name = PLAINTEXT
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.146498644Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | [2024-02-29T23:14:41.774+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248481772
kafka | inter.broker.protocol.version = 3.6-IV2
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
grafana | logger=migrator t=2024-02-29T23:14:03.147799167Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.300543ms
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-pap | [2024-02-29T23:14:41.777+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-1, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Subscribed to topic(s): policy-pdp-pap
kafka | kafka.metrics.polling.interval.secs = 10
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.151400363Z level=info msg="Executing migration" id="Update dashboard title length"
policy-apex-pdp | sasl.login.class = null
policy-pap | [2024-02-29T23:14:41.778+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | kafka.metrics.reporters = []
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
grafana | logger=migrator t=2024-02-29T23:14:03.151429353Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=26.65µs
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-pap | allow.auto.create.topics = true
kafka | leader.imbalance.check.interval.seconds = 300
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.15414002Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-pap | auto.commit.interval.ms = 5000
kafka | leader.imbalance.per.broker.percentage = 10
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.154964818Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=821.588µs
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-pap | auto.include.jmx.reporter = true
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.160607434Z level=info msg="Executing migration" id="create dashboard_provisioning"
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-pap | auto.offset.reset = latest
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
grafana | logger=migrator t=2024-02-29T23:14:03.163016188Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=2.396394ms
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-pap | bootstrap.servers = [kafka:9092]
kafka | log.cleaner.backoff.ms = 15000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.166816686Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-pap | check.crcs = true
kafka | log.cleaner.dedupe.buffer.size = 134217728
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-02-29T23:14:03.174102339Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.276723ms
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | log.cleaner.delete.retention.ms = 86400000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.176939457Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-pap | client.id = consumer-policy-pap-2
kafka | log.cleaner.enable = true
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.177573783Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=634.186µs
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-pap | client.rack =
kafka | log.cleaner.io.buffer.load.factor = 0.9
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.181652754Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | connections.max.idle.ms = 540000
kafka | log.cleaner.io.buffer.size = 524288
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
grafana | logger=migrator t=2024-02-29T23:14:03.183371391Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.716447ms
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-pap | default.api.timeout.ms = 60000
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.187036688Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-pap | enable.auto.commit = true
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-02-29T23:14:03.188429872Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.392334ms
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | exclude.internal.topics = true
kafka | log.cleaner.min.cleanable.ratio = 0.5
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.191959667Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | fetch.max.bytes = 52428800
kafka | log.cleaner.min.compaction.lag.ms = 0
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.19226378Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=308.173µs
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | fetch.max.wait.ms = 500
kafka | log.cleaner.threads = 1
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.195589513Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | fetch.min.bytes = 1
kafka | log.cleanup.policy = [delete]
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
grafana | logger=migrator t=2024-02-29T23:14:03.19621817Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=626.226µs
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-pap | group.id = policy-pap
kafka | log.dir = /tmp/kafka-logs
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.20130154Z level=info msg="Executing migration" id="Add check_sum column"
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-pap | group.instance.id = null
kafka | log.dirs = /var/lib/kafka/data
grafana | logger=migrator t=2024-02-29T23:14:03.204729074Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.426524ms
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-pap | heartbeat.interval.ms = 3000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
grafana | logger=migrator t=2024-02-29T23:14:03.20830234Z level=info msg="Executing migration" id="Add index for dashboard_title"
policy-pap | interceptor.classes = []
policy-db-migrator | --------------
kafka | log.flush.interval.messages = 9223372036854775807
grafana | logger=migrator t=2024-02-29T23:14:03.209126098Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=823.398µs
policy-db-migrator |
kafka | log.flush.interval.ms = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-02-29T23:14:03.215113598Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
policy-db-migrator |
kafka | log.flush.offset.checkpoint.interval.ms = 60000
policy-apex-pdp | security.providers = null
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-02-29T23:14:03.215385601Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=272.212µs
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
policy-apex-pdp | send.buffer.bytes = 131072
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-02-29T23:14:03.218959846Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
policy-db-migrator | --------------
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
policy-apex-pdp | session.timeout.ms = 45000
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-29T23:14:03.219225809Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=266.683µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
kafka | log.index.interval.bytes = 4096
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-02-29T23:14:03.222883105Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
policy-db-migrator | --------------
kafka | log.index.size.max.bytes = 10485760
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-02-29T23:14:03.224171618Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.288433ms
policy-db-migrator |
kafka | log.local.retention.bytes = -2
policy-apex-pdp | ssl.cipher.suites = null
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-02-29T23:14:03.227655723Z level=info msg="Executing migration" id="Add isPublic for dashboard"
policy-db-migrator |
kafka | log.local.retention.ms = -2
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-02-29T23:14:03.231088377Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.432934ms
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
kafka | log.message.downconversion.enable = true
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-02-29T23:14:03.237033956Z level=info msg="Executing migration" id="create data_source table"
policy-db-migrator | --------------
kafka | log.message.format.version = 3.0-IV1
policy-apex-pdp | ssl.engine.factory.class = null
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-02-29T23:14:03.237931325Z level=info msg="Migration successfully executed" id="create data_source table" duration=896.659µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
kafka | log.message.timestamp.after.max.ms = 9223372036854775807
policy-apex-pdp | ssl.key.password = null
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-02-29T23:14:03.24135875Z level=info msg="Executing migration" id="add index data_source.account_id"
policy-db-migrator | --------------
kafka | log.message.timestamp.before.max.ms = 9223372036854775807
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-02-29T23:14:03.242614632Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.255952ms
policy-db-migrator |
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-02-29T23:14:03.246246358Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
policy-db-migrator |
kafka | log.message.timestamp.type = CreateTime
policy-apex-pdp | ssl.keystore.key = null
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-02-29T23:14:03.247531391Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.279733ms
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
kafka | log.preallocate = false
policy-apex-pdp | ssl.keystore.location = null
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-02-29T23:14:03.253299728Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
policy-db-migrator | --------------
kafka | log.retention.bytes = -1
policy-apex-pdp | ssl.keystore.password = null
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-02-29T23:14:03.254114106Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=814.168µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | log.retention.check.interval.ms = 300000
policy-apex-pdp | ssl.keystore.type = JKS
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-02-29T23:14:03.258078986Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
policy-db-migrator | --------------
kafka | log.retention.hours = 168
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-29T23:14:03.259259618Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.173802ms
policy-db-migrator |
policy-apex-pdp | ssl.provider = null
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-02-29T23:14:03.262783763Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
kafka | log.retention.minutes = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-02-29T23:14:03.271896924Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.113611ms
policy-db-migrator |
kafka | log.retention.ms = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-02-29T23:14:03.277455709Z level=info msg="Executing migration" id="create data_source table v2"
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
kafka | log.roll.hours = 168
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-02-29T23:14:03.278305038Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=848.779µs
policy-db-migrator | --------------
policy-apex-pdp | ssl.truststore.certificates = null
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-02-29T23:14:03.282030365Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | log.roll.jitter.hours = 0
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-02-29T23:14:03.282910094Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=874.589µs
policy-db-migrator | --------------
policy-apex-pdp | ssl.truststore.location = null
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-02-29T23:14:03.288780202Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
policy-db-migrator |
policy-apex-pdp | ssl.truststore.password = null
grafana | logger=migrator t=2024-02-29T23:14:03.289735192Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=954.36µs
policy-db-migrator |
kafka | log.roll.jitter.ms = null
policy-pap | sasl.login.callback.handler.class = null
policy-apex-pdp | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-02-29T23:14:03.29355687Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
kafka | log.roll.ms = null
policy-pap | sasl.login.class = null
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-29T23:14:03.294423579Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=866.858µs
policy-db-migrator | --------------
kafka | log.segment.bytes = 1073741824
policy-pap | sasl.login.connect.timeout.ms = null
policy-apex-pdp |
grafana | logger=migrator t=2024-02-29T23:14:03.298340627Z level=info msg="Executing migration" id="Add column with_credentials"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
kafka | log.segment.delete.delay.ms = 60000
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-02-29T23:14:03.302123275Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.783038ms
kafka | max.connection.creation.rate = 2147483647
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | [2024-02-29T23:14:45.241+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-db-migrator | --------------
grafana |
logger=migrator t=2024-02-29T23:14:03.307948423Z level=info msg="Executing migration" id="Add secure json data column" kafka | max.connections = 2147483647 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | [2024-02-29T23:14:45.241+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:03.310359927Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.410984ms kafka | max.connections.per.ip = 2147483647 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | [2024-02-29T23:14:45.241+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248485241 policy-db-migrator | kafka | max.connections.per.ip.overrides = policy-apex-pdp | [2024-02-29T23:14:45.242+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql grafana | logger=migrator t=2024-02-29T23:14:03.314295637Z level=info msg="Executing migration" id="Update data_source table charset" policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | max.incremental.fetch.session.cache.slots = 1000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.314340967Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=52.62µs policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | [2024-02-29T23:14:45.245+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=30ab67d0-1072-4fed-bd59-8343130e1fdb, alive=false, publisher=null]]: starting kafka | message.max.bytes = 1048588 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA 
VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-02-29T23:14:03.318199706Z level=info msg="Executing migration" id="Update initial version to 1" policy-pap | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | [2024-02-29T23:14:45.264+00:00|INFO|ProducerConfig|main] ProducerConfig values: kafka | metadata.log.dir = null grafana | logger=migrator t=2024-02-29T23:14:03.318478238Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=289.172µs policy-pap | sasl.mechanism = GSSAPI policy-apex-pdp | acks = -1 policy-db-migrator | -------------- kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | auto.include.jmx.reporter = true policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:03.322362257Z level=info msg="Executing migration" id="Add read_only data column" kafka | metadata.log.max.snapshot.interval.ms = 3600000 policy-pap | sasl.oauthbearer.expected.audience = null policy-apex-pdp | batch.size = 16384 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:03.326000453Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.637506ms policy-apex-pdp | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-02-29T23:14:03.331870092Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" kafka | metadata.log.segment.bytes = 1073741824 policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-apex-pdp | buffer.memory = 33554432 grafana | logger=migrator t=2024-02-29T23:14:03.332046763Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=176.691µs kafka | metadata.log.segment.min.bytes = 8388608 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | -------------- policy-apex-pdp | 
client.dns.lookup = use_all_dns_ips grafana | logger=migrator t=2024-02-29T23:14:03.335747271Z level=info msg="Executing migration" id="Update json_data with nulls" kafka | metadata.log.segment.ms = 604800000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-apex-pdp | client.id = producer-1 grafana | logger=migrator t=2024-02-29T23:14:03.335989703Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=242.543µs kafka | metadata.max.idle.interval.ms = 500 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- policy-apex-pdp | compression.type = none grafana | logger=migrator t=2024-02-29T23:14:03.340109124Z level=info msg="Executing migration" id="Add uid column" policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | policy-apex-pdp | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-02-29T23:14:03.34374735Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.625106ms kafka | metadata.max.retention.bytes = 104857600 policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | policy-apex-pdp | delivery.timeout.ms = 120000 grafana | logger=migrator t=2024-02-29T23:14:03.347321356Z level=info msg="Executing migration" id="Update uid value" kafka | metadata.max.retention.ms = 604800000 policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-apex-pdp | enable.idempotence = true grafana | logger=migrator t=2024-02-29T23:14:03.347491228Z level=info msg="Migration successfully executed" id="Update uid value" duration=167.352µs kafka | metric.reporters = [] policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | -------------- 
policy-apex-pdp | interceptor.classes = [] grafana | logger=migrator t=2024-02-29T23:14:03.35272696Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" kafka | metrics.num.samples = 2 policy-pap | security.protocol = PLAINTEXT policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-02-29T23:14:03.353968142Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.234462ms kafka | metrics.recording.level = INFO policy-pap | security.providers = null policy-db-migrator | -------------- policy-apex-pdp | linger.ms = 0 grafana | logger=migrator t=2024-02-29T23:14:03.357342226Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" kafka | metrics.sample.window.ms = 30000 policy-pap | send.buffer.bytes = 131072 policy-db-migrator | policy-apex-pdp | max.block.ms = 60000 kafka | min.insync.replicas = 1 policy-pap | session.timeout.ms = 45000 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:03.35872574Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.382733ms policy-apex-pdp | max.in.flight.requests.per.connection = 5 kafka | node.id = 1 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-apex-pdp | max.request.size = 1048576 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.362715159Z level=info msg="Executing migration" id="create api_key table" kafka | num.io.threads = 8 policy-apex-pdp | metadata.max.age.ms = 300000 policy-pap | ssl.cipher.suites = 
null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-02-29T23:14:03.363418096Z level=info msg="Migration successfully executed" id="create api_key table" duration=702.457µs kafka | num.network.threads = 3 policy-db-migrator | -------------- kafka | num.partitions = 1 policy-apex-pdp | metadata.max.idle.ms = 300000 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-02-29T23:14:03.36978561Z level=info msg="Executing migration" id="add index api_key.account_id" policy-db-migrator | kafka | num.recovery.threads.per.data.dir = 1 policy-apex-pdp | metric.reporters = [] policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-02-29T23:14:03.371001792Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.215502ms policy-db-migrator | kafka | num.replica.alter.log.dirs.threads = null policy-apex-pdp | metrics.num.samples = 2 policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-02-29T23:14:03.374580767Z level=info msg="Executing migration" id="add index api_key.key" policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql kafka | num.replica.fetchers = 1 policy-apex-pdp | metrics.recording.level = INFO policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-02-29T23:14:03.37585489Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.272443ms policy-db-migrator | -------------- policy-apex-pdp | metrics.sample.window.ms = 30000 policy-pap | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.certificate.chain = null kafka | offset.metadata.max.bytes = 4096 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) 
NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) kafka | offsets.commit.required.acks = -1 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-02-29T23:14:03.379467526Z level=info msg="Executing migration" id="add index api_key.account_id_name" policy-pap | ssl.keystore.key = null policy-db-migrator | -------------- kafka | offsets.commit.timeout.ms = 5000 policy-apex-pdp | partitioner.availability.timeout.ms = 0 grafana | logger=migrator t=2024-02-29T23:14:03.380335385Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=866.649µs policy-pap | ssl.keystore.location = null policy-db-migrator | kafka | offsets.load.buffer.size = 5242880 policy-apex-pdp | partitioner.class = null grafana | logger=migrator t=2024-02-29T23:14:03.390262464Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" policy-pap | ssl.keystore.password = null policy-db-migrator | kafka | offsets.retention.check.interval.ms = 600000 policy-apex-pdp | partitioner.ignore.keys = false grafana | logger=migrator t=2024-02-29T23:14:03.391460466Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.197402ms policy-pap | ssl.keystore.type = JKS policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql kafka | offsets.retention.minutes = 10080 policy-apex-pdp | receive.buffer.bytes = 32768 grafana | logger=migrator t=2024-02-29T23:14:03.395051411Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | -------------- kafka | offsets.topic.compression.codec = 0 policy-apex-pdp | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-02-29T23:14:03.396226413Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.174342ms 
policy-pap | ssl.provider = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | offsets.topic.num.partitions = 50 policy-apex-pdp | reconnect.backoff.ms = 50 policy-pap | ssl.secure.random.implementation = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.402611127Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" kafka | offsets.topic.replication.factor = 1 policy-apex-pdp | request.timeout.ms = 30000 policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:03.403773229Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.158561ms kafka | offsets.topic.segment.bytes = 104857600 policy-apex-pdp | retries = 2147483647 policy-pap | ssl.truststore.certificates = null policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:03.407623067Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding policy-apex-pdp | retry.backoff.ms = 100 policy-pap | ssl.truststore.location = null policy-db-migrator | > upgrade 0450-pdpgroup.sql grafana | logger=migrator t=2024-02-29T23:14:03.416341834Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.717937ms kafka | password.encoder.iterations = 4096 policy-apex-pdp | sasl.client.callback.handler.class = null policy-pap | ssl.truststore.password = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.420288733Z level=info 
msg="Executing migration" id="create api_key table v2" kafka | password.encoder.key.length = 128 policy-apex-pdp | sasl.jaas.config = null policy-pap | ssl.truststore.type = JKS policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) grafana | logger=migrator t=2024-02-29T23:14:03.42095978Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=670.677µs kafka | password.encoder.keyfactory.algorithm = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.476859957Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" kafka | password.encoder.old.secret = null policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:03.47815764Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.301853ms kafka | password.encoder.secret = null policy-apex-pdp | sasl.kerberos.service.name = null policy-pap | [2024-02-29T23:14:41.784+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:03.482053039Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | [2024-02-29T23:14:41.784+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql grafana | logger=migrator t=2024-02-29T23:14:03.48320947Z level=info 
msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.155421ms kafka | process.roles = [] policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | [2024-02-29T23:14:41.784+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248481784 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.487546624Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" kafka | producer.id.expiration.check.interval.ms = 600000 policy-apex-pdp | sasl.login.callback.handler.class = null policy-pap | [2024-02-29T23:14:41.784+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-02-29T23:14:03.48913178Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.588946ms kafka | producer.id.expiration.ms = 86400000 policy-apex-pdp | sasl.login.class = null policy-db-migrator | -------------- policy-pap | [2024-02-29T23:14:42.177+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], 
currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json grafana | logger=migrator t=2024-02-29T23:14:03.495946978Z level=info msg="Executing migration" id="copy api_key v1 to v2" kafka | producer.purgatory.purge.interval.requests = 1000 policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-db-migrator | policy-pap | [2024-02-29T23:14:42.339+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning grafana | logger=migrator t=2024-02-29T23:14:03.496298601Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=351.714µs kafka | queued.max.request.bytes = -1 policy-apex-pdp | sasl.login.read.timeout.ms = null policy-db-migrator | policy-pap | [2024-02-29T23:14:42.595+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@53917c92, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1fa796a4, org.springframework.security.web.context.SecurityContextHolderFilter@1f013047, org.springframework.security.web.header.HeaderWriterFilter@ce0bbd5, org.springframework.security.web.authentication.logout.LogoutFilter@44c2e8a8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4fbbd98c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@51566ce0, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@17e6d07b, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@68de8522, org.springframework.security.web.access.ExceptionTranslationFilter@1f7557fe, org.springframework.security.web.access.intercept.AuthorizationFilter@3879feec] grafana | logger=migrator 
t=2024-02-29T23:14:03.499838426Z level=info msg="Executing migration" id="Drop old table api_key_v1" kafka | queued.max.requests = 500 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | > upgrade 0470-pdp.sql policy-pap | [2024-02-29T23:14:43.482+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' grafana | logger=migrator t=2024-02-29T23:14:03.500702095Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=863.489µs kafka | quota.window.num = 11 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | -------------- policy-pap | [2024-02-29T23:14:43.609+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] kafka | quota.window.size.seconds = 1 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-pap | [2024-02-29T23:14:43.635+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-02-29T23:14:03.506985778Z level=info msg="Executing migration" id="Update api_key table charset" policy-pap | [2024-02-29T23:14:43.656+00:00|INFO|ServiceManager|main] Policy PAP starting kafka | remote.log.manager.task.interval.ms = 30000 policy-db-migrator | -------------- policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-02-29T23:14:03.507025558Z level=info msg="Migration successfully executed" id="Update api_key table 
charset" duration=40.5µs policy-pap | [2024-02-29T23:14:43.656+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 policy-db-migrator | policy-apex-pdp | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-02-29T23:14:03.510858746Z level=info msg="Executing migration" id="Add expires to api_key table" policy-pap | [2024-02-29T23:14:43.657+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters kafka | remote.log.manager.task.retry.backoff.ms = 500 policy-db-migrator | policy-apex-pdp | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-02-29T23:14:03.514861286Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.00085ms policy-pap | [2024-02-29T23:14:43.657+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener kafka | remote.log.manager.task.retry.jitter = 0.2 policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-02-29T23:14:03.518315631Z level=info msg="Executing migration" id="Add service account foreign key" policy-pap | [2024-02-29T23:14:43.657+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher kafka | remote.log.manager.thread.pool.size = 10 policy-db-migrator | -------------- policy-apex-pdp | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-02-29T23:14:03.520728095Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.415365ms policy-pap | [2024-02-29T23:14:43.658+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher kafka | remote.log.metadata.custom.metadata.max.bytes = 128 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT 
DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-apex-pdp | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-02-29T23:14:03.524203419Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" policy-pap | [2024-02-29T23:14:43.658+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager policy-db-migrator | -------------- policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-02-29T23:14:03.524362071Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=158.382µs policy-pap | [2024-02-29T23:14:43.662+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ee5900cb-eee5-431a-a953-12f2e7174bf4, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3ff3275b kafka | remote.log.metadata.manager.class.path = null policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator 
t=2024-02-29T23:14:03.529133968Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
policy-pap | [2024-02-29T23:14:43.674+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ee5900cb-eee5-431a-a953-12f2e7174bf4, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
policy-db-migrator |
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-29T23:14:03.531562813Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.428395ms
policy-pap | [2024-02-29T23:14:43.674+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | remote.log.metadata.manager.listener.name = null
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-02-29T23:14:03.53530098Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
policy-pap | allow.auto.create.topics = true
kafka | remote.log.reader.max.pending.tasks = 100
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-02-29T23:14:03.537856275Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.553035ms
policy-pap | auto.commit.interval.ms = 5000
kafka | remote.log.reader.threads = 10
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-02-29T23:14:03.541164258Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
policy-pap | auto.include.jmx.reporter = true
kafka | remote.log.storage.manager.class.name = null
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-02-29T23:14:03.541910086Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=749.158µs
policy-pap | auto.offset.reset = latest
kafka | remote.log.storage.manager.class.path = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.54733353Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
policy-apex-pdp | security.protocol = PLAINTEXT
policy-pap | bootstrap.servers = [kafka:9092]
kafka | remote.log.storage.manager.impl.prefix = rsm.config.
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.547873735Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=539.895µs
policy-apex-pdp | security.providers = null
policy-pap | check.crcs = true
kafka | remote.log.storage.system.enable = false
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
grafana | logger=migrator t=2024-02-29T23:14:03.551644393Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
policy-apex-pdp | send.buffer.bytes = 131072
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | replica.fetch.backoff.ms = 1000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.552760674Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.111771ms
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-pap | client.id = consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3
kafka | replica.fetch.max.bytes = 1048576
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-02-29T23:14:03.557334009Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-pap | client.rack =
kafka | replica.fetch.min.bytes = 1
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.558581442Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.246633ms
policy-apex-pdp | ssl.cipher.suites = null
policy-pap | connections.max.idle.ms = 540000
kafka | replica.fetch.response.max.bytes = 10485760
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.563925065Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | default.api.timeout.ms = 60000
kafka | replica.fetch.wait.max.ms = 500
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.564736173Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=810.698µs
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-pap | enable.auto.commit = true
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
grafana | logger=migrator t=2024-02-29T23:14:03.56845598Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
policy-apex-pdp | ssl.engine.factory.class = null
policy-pap | exclude.internal.topics = true
kafka | replica.lag.time.max.ms = 30000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.56947561Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.02086ms
policy-apex-pdp | ssl.key.password = null
policy-pap | fetch.max.bytes = 52428800
kafka | replica.selector.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
grafana | logger=migrator t=2024-02-29T23:14:03.573321729Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-pap | fetch.max.wait.ms = 500
kafka | replica.socket.receive.buffer.bytes = 65536
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.57345813Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=137.451µs
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-pap | fetch.min.bytes = 1
kafka | replica.socket.timeout.ms = 30000
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.579142777Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
policy-apex-pdp | ssl.keystore.key = null
policy-pap | group.id = ee5900cb-eee5-431a-a953-12f2e7174bf4
kafka | replication.quota.window.num = 11
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.579204207Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=62.86µs
policy-apex-pdp | ssl.keystore.location = null
policy-pap | group.instance.id = null
kafka | replication.quota.window.size.seconds = 1
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
grafana | logger=migrator t=2024-02-29T23:14:03.58349572Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
policy-apex-pdp | ssl.keystore.password = null
policy-pap | heartbeat.interval.ms = 3000
kafka | request.timeout.ms = 30000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.588131976Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.631376ms
policy-apex-pdp | ssl.keystore.type = JKS
policy-pap | interceptor.classes = []
kafka | reserved.broker.max.id = 1000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
grafana | logger=migrator t=2024-02-29T23:14:03.592088336Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-pap | internal.leave.group.on.close = true
kafka | sasl.client.callback.handler.class = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.596936454Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=4.852998ms
policy-apex-pdp | ssl.provider = null
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
kafka | sasl.enabled.mechanisms = [GSSAPI]
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.602118696Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
policy-apex-pdp | ssl.secure.random.implementation = null
policy-pap | isolation.level = read_uncommitted
kafka | sasl.jaas.config = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.602178876Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=60.61µs
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
grafana | logger=migrator t=2024-02-29T23:14:03.605008865Z level=info msg="Executing migration" id="create quota table v1"
policy-apex-pdp | ssl.truststore.certificates = null
policy-pap | max.partition.fetch.bytes = 1048576
kafka | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.605474429Z level=info msg="Migration successfully executed" id="create quota table v1" duration=465.244µs
policy-apex-pdp | ssl.truststore.location = null
policy-pap | max.poll.interval.ms = 300000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-apex-pdp | ssl.truststore.password = null
grafana | logger=migrator t=2024-02-29T23:14:03.609577861Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
policy-pap | max.poll.records = 500
kafka | sasl.kerberos.service.name = null
policy-db-migrator | --------------
policy-apex-pdp | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-02-29T23:14:03.611383388Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.804577ms
policy-pap | metadata.max.age.ms = 300000
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator |
policy-apex-pdp | transaction.timeout.ms = 60000
grafana | logger=migrator t=2024-02-29T23:14:03.616835693Z level=info msg="Executing migration" id="Update quota table charset"
policy-pap | metric.reporters = []
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator |
policy-apex-pdp | transactional.id = null
grafana | logger=migrator t=2024-02-29T23:14:03.616864653Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=30.08µs
policy-pap | metrics.num.samples = 2
kafka | sasl.login.callback.handler.class = null
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
grafana | logger=migrator t=2024-02-29T23:14:03.621513879Z level=info msg="Executing migration" id="create plugin_setting table"
policy-pap | metrics.recording.level = INFO
kafka | sasl.login.class = null
policy-db-migrator | --------------
policy-apex-pdp |
grafana | logger=migrator t=2024-02-29T23:14:03.622256257Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=741.968µs
policy-pap | metrics.sample.window.ms = 30000
kafka | sasl.login.connect.timeout.ms = null
policy-apex-pdp | [2024-02-29T23:14:45.278+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
grafana | logger=migrator t=2024-02-29T23:14:03.625131105Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
kafka | sasl.login.read.timeout.ms = null
policy-apex-pdp | [2024-02-29T23:14:45.300+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
grafana | logger=migrator t=2024-02-29T23:14:03.626024444Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=884.439µs
policy-pap | receive.buffer.bytes = 65536
policy-db-migrator | --------------
kafka | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | [2024-02-29T23:14:45.300+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-02-29T23:14:03.628877683Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
policy-pap | reconnect.backoff.max.ms = 1000
policy-db-migrator |
kafka | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | [2024-02-29T23:14:45.301+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248485300
grafana | logger=migrator t=2024-02-29T23:14:03.631833672Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.955689ms
policy-pap | reconnect.backoff.ms = 50
policy-db-migrator |
kafka | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | [2024-02-29T23:14:45.301+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=30ab67d0-1072-4fed-bd59-8343130e1fdb, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
grafana | logger=migrator t=2024-02-29T23:14:03.63666208Z level=info msg="Executing migration" id="Update plugin_setting table charset"
policy-pap | request.timeout.ms = 30000
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
kafka | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | [2024-02-29T23:14:45.301+00:00|INFO|ServiceManager|main] service manager starting set alive
grafana | logger=migrator t=2024-02-29T23:14:03.636687921Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=26.731µs
policy-pap | retry.backoff.ms = 100
policy-db-migrator | --------------
kafka | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | [2024-02-29T23:14:45.301+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
grafana | logger=migrator t=2024-02-29T23:14:03.640015494Z level=info msg="Executing migration" id="create session table"
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
kafka | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | [2024-02-29T23:14:45.304+00:00|INFO|ServiceManager|main] service manager starting topic sinks
grafana | logger=migrator t=2024-02-29T23:14:03.640805832Z level=info msg="Migration successfully executed" id="create session table" duration=790.068µs
policy-pap | sasl.jaas.config = null
policy-db-migrator | --------------
kafka | sasl.mechanism.controller.protocol = GSSAPI
policy-apex-pdp | [2024-02-29T23:14:45.304+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
grafana | logger=migrator t=2024-02-29T23:14:03.648709271Z level=info msg="Executing migration" id="Drop old table playlist table"
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator |
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
policy-apex-pdp | [2024-02-29T23:14:45.318+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
grafana | logger=migrator t=2024-02-29T23:14:03.648852122Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=138.192µs
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator |
kafka | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | [2024-02-29T23:14:45.318+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
kafka | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-02-29T23:14:03.654358787Z level=info msg="Executing migration" id="Drop old table playlist_item table"
policy-apex-pdp | [2024-02-29T23:14:45.318+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator | --------------
kafka | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-02-29T23:14:03.654446218Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=82.631µs
policy-apex-pdp | [2024-02-29T23:14:45.318+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-02-29T23:14:03.657525198Z level=info msg="Executing migration" id="create playlist table v2"
policy-apex-pdp | [2024-02-29T23:14:45.318+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-pap | sasl.login.callback.handler.class = null
policy-db-migrator | --------------
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-02-29T23:14:03.658288546Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=765.928µs
policy-apex-pdp | [2024-02-29T23:14:45.319+00:00|INFO|ServiceManager|main] service manager starting Create REST server
policy-pap | sasl.login.class = null
policy-db-migrator |
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-29T23:14:03.661840041Z level=info msg="Executing migration" id="create playlist item table v2"
policy-apex-pdp | [2024-02-29T23:14:45.335+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator |
kafka | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-02-29T23:14:03.66266309Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=822.409µs
policy-apex-pdp | []
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | > upgrade 0570-toscadatatype.sql
kafka | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-02-29T23:14:03.666214905Z level=info msg="Executing migration" id="Update playlist table charset"
policy-apex-pdp | [2024-02-29T23:14:45.338+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-02-29T23:14:03.666244515Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=30.31µs
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"bd7b6aa1-3559-4908-8969-b03734dbc54b","timestampMs":1709248485318,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"}
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-02-29T23:14:03.67074675Z level=info msg="Executing migration" id="Update playlist_item table charset"
policy-apex-pdp | [2024-02-29T23:14:45.590+00:00|INFO|ServiceManager|main] service manager starting Rest Server
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | sasl.server.callback.handler.class = null
grafana | logger=migrator t=2024-02-29T23:14:03.670774731Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=29.301µs
policy-apex-pdp | [2024-02-29T23:14:45.591+00:00|INFO|ServiceManager|main] service manager starting
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.server.max.receive.size = 524288
policy-apex-pdp | [2024-02-29T23:14:45.591+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-02-29T23:14:03.68677336Z level=info msg="Executing migration" id="Add playlist column created_at"
kafka | security.inter.broker.protocol = PLAINTEXT
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-29T23:14:03.692248705Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.473275ms
policy-apex-pdp | [2024-02-29T23:14:45.591+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
kafka | security.providers = null
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-02-29T23:14:03.700535657Z level=info msg="Executing migration" id="Add playlist column updated_at"
policy-apex-pdp | [2024-02-29T23:14:45.619+00:00|INFO|ServiceManager|main] service manager started
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-02-29T23:14:03.702753459Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.216912ms
policy-apex-pdp | [2024-02-29T23:14:45.619+00:00|INFO|ServiceManager|main] service manager started
kafka | server.max.startup.time.ms = 9223372036854775807
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-02-29T23:14:03.707245584Z level=info msg="Executing migration" id="drop preferences table v2"
policy-apex-pdp | [2024-02-29T23:14:45.620+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
kafka | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | [2024-02-29T23:14:45.619+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.707414206Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=168.202µs
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | socket.listen.backlog.size = 50
policy-apex-pdp | [2024-02-29T23:14:45.726+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: FqFLOU6jRgiQltXq-uD-BA
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.709966811Z level=info msg="Executing migration" id="drop preferences table v3"
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | socket.receive.buffer.bytes = 102400
policy-apex-pdp | [2024-02-29T23:14:45.726+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Cluster ID: FqFLOU6jRgiQltXq-uD-BA
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
grafana | logger=migrator t=2024-02-29T23:14:03.710049792Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=82.921µs
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | socket.request.max.bytes = 104857600
policy-apex-pdp | [2024-02-29T23:14:45.728+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.713260174Z level=info msg="Executing migration" id="create preferences table v3"
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | socket.send.buffer.bytes = 102400
policy-apex-pdp | [2024-02-29T23:14:45.728+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-02-29T23:14:03.714093992Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=833.698µs
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | ssl.cipher.suites = []
policy-apex-pdp | [2024-02-29T23:14:45.737+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] (Re-)joining group
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.719566997Z level=info msg="Executing migration" id="Update preferences table charset"
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | ssl.client.auth = none
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.719595247Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=28.55µs
policy-pap | sasl.oauthbearer.token.endpoint.url = null
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | [2024-02-29T23:14:45.753+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Request joining group due to: need to re-join with the given member-id: consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.730451206Z level=info msg="Executing migration" id="Add column team_id in preferences"
policy-pap | security.protocol = PLAINTEXT
kafka | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | [2024-02-29T23:14:45.754+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-pap | security.providers = null
policy-apex-pdp | [2024-02-29T23:14:45.754+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] (Re-)joining group
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.736414665Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.962259ms
kafka | ssl.engine.factory.class = null
policy-pap | send.buffer.bytes = 131072
policy-apex-pdp | [2024-02-29T23:14:46.296+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
grafana | logger=migrator t=2024-02-29T23:14:03.740464685Z level=info msg="Executing migration" id="Update team_id column values in preferences"
policy-apex-pdp | [2024-02-29T23:14:46.298+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
grafana | logger=migrator t=2024-02-29T23:14:03.740739048Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=274.323µs
kafka | ssl.key.password = null
policy-pap | session.timeout.ms = 45000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.744761258Z level=info msg="Executing migration" id="Add column week_start in preferences"
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator |
policy-apex-pdp | [2024-02-29T23:14:48.761+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Successfully joined group with generation Generation{generationId=1, memberId='consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025', protocol='range'}
kafka | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-02-29T23:14:03.747897169Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.134161ms
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-db-migrator |
policy-apex-pdp | [2024-02-29T23:14:48.768+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Finished assignment for group at generation 1: {consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025=Assignment(partitions=[policy-pdp-pap-0])}
kafka | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-02-29T23:14:03.752867179Z level=info msg="Executing migration" id="Add column preferences.json_data"
policy-pap | ssl.cipher.suites = null
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-apex-pdp | [2024-02-29T23:14:48.802+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Successfully synced group in generation Generation{generationId=1, memberId='consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025', protocol='range'}
kafka | ssl.keystore.key = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | [2024-02-29T23:14:48.802+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
kafka | ssl.keystore.location = null
grafana | logger=migrator t=2024-02-29T23:14:03.756263923Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.396744ms
policy-db-migrator | --------------
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | ssl.keystore.password = null
grafana | logger=migrator t=2024-02-29T23:14:03.760819688Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-apex-pdp | [2024-02-29T23:14:48.804+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | ssl.engine.factory.class = null
kafka | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-02-29T23:14:03.760885209Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=66.171µs
policy-db-migrator | --------------
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-02-29T23:14:03.765909369Z level=info msg="Executing migration" id="Add preferences index org_id"
policy-db-migrator |
policy-apex-pdp | [2024-02-29T23:14:48.810+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Found no committed offset for partition policy-pdp-pap-0
kafka | ssl.principal.mapping.rules = DEFAULT
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-02-29T23:14:03.766798778Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=886.409µs
policy-db-migrator |
policy-apex-pdp | [2024-02-29T23:14:48.819+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2, groupId=9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
kafka | ssl.protocol = TLSv1.3 policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-02-29T23:14:03.772557965Z level=info msg="Executing migration" id="Add preferences index user_id" policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-apex-pdp | [2024-02-29T23:14:56.171+00:00|INFO|RequestLog|qtp1068445309-32] 172.17.0.2 - policyadmin [29/Feb/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10653 "-" "Prometheus/2.50.1" kafka | ssl.provider = null grafana | logger=migrator t=2024-02-29T23:14:03.773694997Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.136512ms policy-db-migrator | -------------- policy-apex-pdp | [2024-02-29T23:15:05.319+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] policy-pap | ssl.keystore.key = null kafka | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-02-29T23:14:03.783306002Z level=info msg="Executing migration" id="create alert table v1" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35bb4aa0-ff48-497e-84df-a13cf4a1f6c0","timestampMs":1709248505318,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"} policy-pap | ssl.keystore.location = null kafka | ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- policy-apex-pdp | [2024-02-29T23:15:05.343+00:00|INFO|network|KAFKA-source-policy-pdp-pap] 
[IN|KAFKA|policy-pdp-pap] kafka | ssl.truststore.certificates = null policy-pap | ssl.keystore.password = null policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35bb4aa0-ff48-497e-84df-a13cf4a1f6c0","timestampMs":1709248505318,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"} policy-db-migrator | kafka | ssl.truststore.location = null policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-02-29T23:14:03.784548765Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.246783ms policy-apex-pdp | [2024-02-29T23:15:05.346+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | kafka | ssl.truststore.password = null policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-02-29T23:14:03.793980959Z level=info msg="Executing migration" id="add index alert org_id & id " policy-apex-pdp | [2024-02-29T23:15:05.518+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | > upgrade 0630-toscanodetype.sql kafka | ssl.truststore.type = JKS policy-pap | ssl.provider = null grafana | logger=migrator t=2024-02-29T23:14:03.795839087Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.857448ms policy-apex-pdp | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","timestampMs":1709248505452,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-02-29T23:14:03.806398763Z level=info 
msg="Executing migration" id="add index alert state" kafka | transaction.max.timeout.ms = 900000 grafana | logger=migrator t=2024-02-29T23:14:03.807759286Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.361693ms policy-apex-pdp | [2024-02-29T23:15:05.528+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) policy-pap | ssl.trustmanager.algorithm = PKIX kafka | transaction.partition.verification.enable = true policy-apex-pdp | [2024-02-29T23:15:05.528+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-02-29T23:14:03.811571464Z level=info msg="Executing migration" id="add index alert dashboard_id" policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d4391888-607f-4994-aede-b40e11cf69cc","timestampMs":1709248505528,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"} policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-02-29T23:14:03.812820997Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.255173ms kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 policy-db-migrator | policy-apex-pdp | [2024-02-29T23:15:05.529+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-02-29T23:14:03.816594754Z level=info msg="Executing migration" id="Create 
alert_rule_tag table v1" kafka | transaction.state.log.load.buffer.size = 5242880 policy-db-migrator | policy-pap | ssl.truststore.password = null policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"63e32a2d-8cb2-4776-a518-f859a710d4f3","timestampMs":1709248505529,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-02-29T23:14:03.8171656Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=570.556µs kafka | transaction.state.log.min.isr = 2 policy-pap | ssl.truststore.type = JKS policy-db-migrator | > upgrade 0640-toscanodetypes.sql grafana | logger=migrator t=2024-02-29T23:14:03.821536253Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" kafka | transaction.state.log.num.partitions = 50 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | [2024-02-29T23:15:05.547+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.822394362Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=857.679µs kafka | transaction.state.log.replication.factor = 3 policy-pap | policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d4391888-607f-4994-aede-b40e11cf69cc","timestampMs":1709248505528,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"} policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT 
NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) grafana | logger=migrator t=2024-02-29T23:14:03.825951368Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" kafka | transaction.state.log.segment.bytes = 104857600 policy-pap | [2024-02-29T23:14:43.681+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-apex-pdp | [2024-02-29T23:15:05.547+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.826541923Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=590.475µs kafka | transactional.id.expiration.ms = 604800000 policy-pap | [2024-02-29T23:14:43.681+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | policy-apex-pdp | [2024-02-29T23:15:05.556+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-02-29T23:14:03.830401352Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" kafka | unclean.leader.election.enable = false policy-pap | [2024-02-29T23:14:43.681+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248483681 policy-db-migrator | policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"63e32a2d-8cb2-4776-a518-f859a710d4f3","timestampMs":1709248505529,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-02-29T23:14:03.842504883Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to 
alert_rule_tag_v1 - v1" duration=12.10274ms kafka | unstable.api.versions.enable = false policy-pap | [2024-02-29T23:14:43.681+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-apex-pdp | [2024-02-29T23:15:05.556+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS grafana | logger=migrator t=2024-02-29T23:14:03.847439532Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" kafka | zookeeper.clientCnxnSocket = null policy-pap | [2024-02-29T23:14:43.682+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher policy-db-migrator | -------------- policy-apex-pdp | [2024-02-29T23:15:05.601+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-02-29T23:14:03.847889436Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=449.474µs policy-pap | [2024-02-29T23:14:43.682+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=90a6ce0d-d2c8-411b-a6c6-dec263368d9a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2ea0161f policy-apex-pdp | 
{"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","timestampMs":1709248505453,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-02-29T23:14:03.85130506Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" kafka | zookeeper.connect = zookeeper:2181 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | [2024-02-29T23:14:43.682+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=90a6ce0d-d2c8-411b-a6c6-dec263368d9a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting grafana | logger=migrator t=2024-02-29T23:14:03.851948067Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=643.107µs kafka | zookeeper.connection.timeout.ms = null policy-db-migrator | -------------- policy-apex-pdp 
| [2024-02-29T23:15:05.604+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-pap | [2024-02-29T23:14:43.682+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: grafana | logger=migrator t=2024-02-29T23:14:03.855452462Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" kafka | zookeeper.max.in.flight.requests = 10 policy-db-migrator | policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"54fea3c5-a806-4c92-8dab-1bfaf7236758","timestampMs":1709248505604,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | allow.auto.create.topics = true grafana | logger=migrator t=2024-02-29T23:14:03.855869706Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=417.084µs kafka | zookeeper.metadata.migration.enable = false policy-db-migrator | policy-apex-pdp | [2024-02-29T23:15:05.616+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | auto.commit.interval.ms = 5000 kafka | zookeeper.session.timeout.ms = 18000 policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"54fea3c5-a806-4c92-8dab-1bfaf7236758","timestampMs":1709248505604,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-02-29T23:14:03.861077938Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" policy-pap | auto.include.jmx.reporter = true kafka | zookeeper.set.acl = false policy-db-migrator | -------------- policy-apex-pdp | [2024-02-29T23:15:05.617+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS grafana | logger=migrator t=2024-02-29T23:14:03.862132878Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.05477ms policy-pap | auto.offset.reset = latest kafka | zookeeper.ssl.cipher.suites = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | [2024-02-29T23:15:05.636+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-02-29T23:14:03.866023377Z level=info msg="Executing migration" id="create alert_notification table v1" policy-pap | bootstrap.servers = [kafka:9092] kafka | zookeeper.ssl.client.enable = false policy-db-migrator | -------------- policy-apex-pdp | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68e6fc14-216e-4b7d-9108-21d5680aedaa","timestampMs":1709248505608,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | 
logger=migrator t=2024-02-29T23:14:03.867070978Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.046961ms kafka | zookeeper.ssl.crl.enable = false policy-apex-pdp | [2024-02-29T23:15:05.642+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-02-29T23:14:03.906048616Z level=info msg="Executing migration" id="Add column is_default" policy-pap | check.crcs = true policy-db-migrator | policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"68e6fc14-216e-4b7d-9108-21d5680aedaa","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"92eb2a16-633e-4c6d-ac5e-481f7cfc27d7","timestampMs":1709248505642,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | kafka | zookeeper.ssl.enabled.protocols = null grafana | logger=migrator t=2024-02-29T23:14:03.911560821Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.512525ms policy-apex-pdp | [2024-02-29T23:15:05.655+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | client.id = consumer-policy-pap-4 policy-db-migrator | > upgrade 0670-toscapolicies.sql kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS grafana | logger=migrator t=2024-02-29T23:14:03.918855164Z level=info msg="Executing migration" id="Add column frequency" policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"68e6fc14-216e-4b7d-9108-21d5680aedaa","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"92eb2a16-633e-4c6d-ac5e-481f7cfc27d7","timestampMs":1709248505642,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | client.rack = policy-db-migrator | -------------- kafka | zookeeper.ssl.keystore.location = null grafana | logger=migrator t=2024-02-29T23:14:03.921296698Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.440824ms policy-apex-pdp | [2024-02-29T23:15:05.655+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) kafka | zookeeper.ssl.keystore.password = null grafana | logger=migrator t=2024-02-29T23:14:03.924933385Z level=info msg="Executing migration" id="Add column send_reminder" policy-apex-pdp | [2024-02-29T23:15:56.084+00:00|INFO|RequestLog|qtp1068445309-29] 172.17.0.2 - policyadmin [29/Feb/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10652 "-" "Prometheus/2.50.1" policy-pap | default.api.timeout.ms = 60000 policy-db-migrator | -------------- kafka | zookeeper.ssl.keystore.type = null grafana | logger=migrator t=2024-02-29T23:14:03.930425399Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=5.491044ms policy-pap | enable.auto.commit = true policy-db-migrator | kafka | zookeeper.ssl.ocsp.enable = false grafana | logger=migrator t=2024-02-29T23:14:03.934383809Z level=info msg="Executing migration" id="Add column disable_resolve_message" policy-pap | exclude.internal.topics = true policy-db-migrator | kafka | zookeeper.ssl.protocol = TLSv1.2 grafana | logger=migrator t=2024-02-29T23:14:03.937795713Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.417684ms 
policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql kafka | zookeeper.ssl.truststore.location = null grafana | logger=migrator t=2024-02-29T23:14:03.943113796Z level=info msg="Executing migration" id="add index alert_notification org_id & name" policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.943970904Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=855.888µs policy-pap | fetch.min.bytes = 1 kafka | zookeeper.ssl.truststore.password = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-02-29T23:14:03.949894553Z level=info msg="Executing migration" id="Update alert table charset" policy-pap | group.id = policy-pap kafka | zookeeper.ssl.truststore.type = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.949972804Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=73.031µs policy-pap | group.instance.id = null kafka | (kafka.server.KafkaConfig) policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:03.954090635Z level=info msg="Executing migration" id="Update alert_notification table charset" policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-02-29 23:14:15,079] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:03.954139996Z level=info 
msg="Migration successfully executed" id="Update alert_notification table charset" duration=42.87µs policy-pap | interceptor.classes = [] policy-db-migrator | > upgrade 0690-toscapolicy.sql grafana | logger=migrator t=2024-02-29T23:14:03.957252567Z level=info msg="Executing migration" id="create notification_journal table v1" kafka | [2024-02-29 23:14:15,080] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | internal.leave.group.on.close = true policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.958132415Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=879.898µs kafka | [2024-02-29 23:14:15,081] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) grafana | logger=migrator t=2024-02-29T23:14:03.964535699Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" kafka | [2024-02-29 23:14:15,085] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-pap | isolation.level = read_uncommitted policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:03.966304857Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.768608ms kafka | [2024-02-29 23:14:15,119] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) policy-pap | 
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.970100775Z level=info msg="Executing migration" id="drop alert_notification_journal"
kafka | [2024-02-29 23:14:15,123] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
policy-pap | max.partition.fetch.bytes = 1048576
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.971283937Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.182552ms
kafka | [2024-02-29 23:14:15,132] INFO Loaded 0 logs in 13ms (kafka.log.LogManager)
policy-pap | max.poll.interval.ms = 300000
policy-db-migrator | > upgrade 0700-toscapolicytype.sql
grafana | logger=migrator t=2024-02-29T23:14:03.977714101Z level=info msg="Executing migration" id="create alert_notification_state table v1"
kafka | [2024-02-29 23:14:15,134] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
policy-pap | max.poll.records = 500
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.979041354Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.331293ms
kafka | [2024-02-29 23:14:15,135] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version))
grafana | logger=migrator t=2024-02-29T23:14:03.982916762Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
kafka | [2024-02-29 23:14:15,147] INFO Starting the log cleaner (kafka.log.LogCleaner)
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:03.983890792Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=973.85µs
kafka | [2024-02-29 23:14:15,194] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
policy-pap | metric.reporters = []
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.988763311Z level=info msg="Executing migration" id="Add for to alert table"
kafka | [2024-02-29 23:14:15,245] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
policy-pap | metrics.num.samples = 2
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:03.992598639Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.834988ms
kafka | [2024-02-29 23:14:15,297] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
policy-pap | metrics.recording.level = INFO
policy-db-migrator | > upgrade 0710-toscapolicytypes.sql
grafana | logger=migrator t=2024-02-29T23:14:03.996005913Z level=info msg="Executing migration" id="Add column uid in alert_notification"
kafka | [2024-02-29 23:14:15,331] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:15,718] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
grafana | logger=migrator t=2024-02-29T23:14:04.000012673Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.00721ms
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version))
kafka | [2024-02-29 23:14:15,740] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
grafana | logger=migrator t=2024-02-29T23:14:04.004363176Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
policy-pap | receive.buffer.bytes = 65536
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:15,741] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
grafana | logger=migrator t=2024-02-29T23:14:04.004539878Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=176.2µs
policy-pap | reconnect.backoff.max.ms = 1000
policy-db-migrator |
kafka | [2024-02-29 23:14:15,746] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
grafana | logger=migrator t=2024-02-29T23:14:04.007751276Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
policy-pap | reconnect.backoff.ms = 50
policy-db-migrator |
kafka | [2024-02-29 23:14:15,751] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-02-29T23:14:04.008582833Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=831.257µs
policy-pap | request.timeout.ms = 30000
policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql
kafka | [2024-02-29 23:14:15,776] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-29T23:14:04.014880848Z level=info msg="Executing migration" id="Remove unique index org_id_name"
policy-pap | retry.backoff.ms = 100
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:15,778] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-29T23:14:04.015685635Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=804.317µs
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-02-29 23:14:15,782] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-29T23:14:04.019203545Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
policy-pap | sasl.jaas.config = null
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:15,784] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-29T23:14:04.023026608Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.822763ms
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator |
kafka | [2024-02-29 23:14:15,785] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-29T23:14:04.029825926Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator |
kafka | [2024-02-29 23:14:15,801] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
grafana | logger=migrator t=2024-02-29T23:14:04.029946267Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=120.151µs
policy-db-migrator | > upgrade 0730-toscaproperty.sql
kafka | [2024-02-29 23:14:15,802] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-02-29T23:14:04.046079376Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:15,826] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-02-29T23:14:04.046817942Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=738.276µs
kafka | [2024-02-29 23:14:15,857] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1709248455845,1709248455845,1,0,0,72057609446883329,258,0,27
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName))
grafana | logger=migrator t=2024-02-29T23:14:04.051443732Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
policy-pap | sasl.login.callback.handler.class = null
policy-db-migrator | --------------
kafka | (kafka.zk.KafkaZkClient)
policy-db-migrator |
kafka | [2024-02-29 23:14:15,858] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
grafana | logger=migrator t=2024-02-29T23:14:04.052132738Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=686.706µs
policy-pap | sasl.login.class = null
policy-db-migrator |
kafka | [2024-02-29 23:14:15,915] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
grafana | logger=migrator t=2024-02-29T23:14:04.061602399Z level=info msg="Executing migration" id="Drop old annotation table v4"
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
kafka | [2024-02-29 23:14:15,924] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-02-29 23:14:15,930] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-29T23:14:04.06168022Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=79.581µs
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-02-29 23:14:15,931] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-02-29T23:14:04.069571318Z level=info msg="Executing migration" id="create annotation table v5"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-02-29 23:14:15,945] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
grafana | logger=migrator t=2024-02-29T23:14:04.070206883Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=637.575µs
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | [2024-02-29 23:14:15,947] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-02-29T23:14:04.074306168Z level=info msg="Executing migration" id="add index annotation 0 v3"
policy-db-migrator |
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-02-29 23:14:15,958] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-29T23:14:04.075613709Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.307671ms
policy-db-migrator |
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-02-29T23:14:04.082018754Z level=info msg="Executing migration" id="add index annotation 1 v3"
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
kafka | [2024-02-29 23:14:15,960] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-29T23:14:04.083131444Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.1123ms
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:15,963] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-02-29T23:14:04.090308516Z level=info msg="Executing migration" id="add index annotation 2 v3"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
kafka | [2024-02-29 23:14:15,968] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-02-29T23:14:04.091244494Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=862.037µs
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:15,982] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-02-29T23:14:04.09554751Z level=info msg="Executing migration" id="add index annotation 3 v3"
policy-db-migrator |
kafka | [2024-02-29 23:14:15,987] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-02-29T23:14:04.096254577Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=706.817µs
policy-db-migrator |
kafka | [2024-02-29 23:14:15,987] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-02-29T23:14:04.100663244Z level=info msg="Executing migration" id="add index annotation 4 v3"
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
kafka | [2024-02-29 23:14:16,000] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-02-29T23:14:04.101374471Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=711.036µs
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:16,000] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-29T23:14:04.11175494Z level=info msg="Executing migration" id="Update annotation table charset"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | [2024-02-29 23:14:16,006] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-02-29T23:14:04.11180795Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=54.06µs
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:16,010] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-02-29T23:14:04.115518902Z level=info msg="Executing migration" id="Add column region_id to annotation table"
policy-db-migrator |
kafka | [2024-02-29 23:14:16,013] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-02-29T23:14:04.120616165Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=5.098613ms
policy-db-migrator |
kafka | [2024-02-29 23:14:16,031] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-02-29T23:14:04.12584381Z level=info msg="Executing migration" id="Drop category_id index"
policy-db-migrator | > upgrade 0770-toscarequirement.sql
kafka | [2024-02-29 23:14:16,036] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-02-29T23:14:04.126676247Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=831.947µs
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:16,039] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | security.providers = null
grafana | logger=migrator t=2024-02-29T23:14:04.133108262Z level=info msg="Executing migration" id="Add column tags to annotation table"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
kafka | [2024-02-29 23:14:16,042] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-02-29T23:14:04.137170297Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.060165ms
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:16,060] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
policy-pap | session.timeout.ms = 45000
grafana | logger=migrator t=2024-02-29T23:14:04.143451541Z level=info msg="Executing migration" id="Create annotation_tag table v2"
policy-db-migrator |
kafka | [2024-02-29 23:14:16,061] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-02-29T23:14:04.144063266Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=611.325µs
policy-db-migrator |
kafka | [2024-02-29 23:14:16,061] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-02-29T23:14:04.1480346Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-02-29T23:14:04.149443842Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.408942ms
kafka | [2024-02-29 23:14:16,062] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-02-29T23:14:04.156333111Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
kafka | [2024-02-29 23:14:16,062] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-02-29T23:14:04.157829454Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.496053ms
kafka | [2024-02-29 23:14:16,065] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-02-29T23:14:04.162655915Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
kafka | [2024-02-29 23:14:16,065] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-02-29T23:14:04.179667111Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=17.012306ms
kafka | [2024-02-29 23:14:16,065] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
policy-db-migrator |
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-29 23:14:16,066] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-02-29T23:14:04.18421557Z level=info msg="Executing migration" id="Create annotation_tag table v3"
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
kafka | [2024-02-29 23:14:16,066] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-29T23:14:04.184700284Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=484.604µs
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:16,071] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-02-29T23:14:04.189693947Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
kafka | [2024-02-29 23:14:16,083] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-02-29T23:14:04.190539524Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=846.177µs
policy-db-migrator | --------------
policy-pap | ssl.keystore.password = null
policy-db-migrator |
kafka | [2024-02-29 23:14:16,084] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
grafana | logger=migrator t=2024-02-29T23:14:04.197554104Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
policy-db-migrator |
kafka | [2024-02-29 23:14:16,089] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
grafana | logger=migrator t=2024-02-29T23:14:04.198051448Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=497.344µs
policy-pap | ssl.keystore.type = JKS
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:16,094] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
grafana | logger=migrator t=2024-02-29T23:14:04.204214171Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
policy-pap | ssl.provider = null
kafka | [2024-02-29 23:14:16,095] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
grafana | logger=migrator t=2024-02-29T23:14:04.205055218Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=840.847µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-02-29 23:14:16,095] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
grafana | logger=migrator t=2024-02-29T23:14:04.212023658Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
policy-db-migrator | --------------
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-02-29 23:14:16,096] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
grafana | logger=migrator t=2024-02-29T23:14:04.21231971Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=294.242µs
policy-db-migrator |
policy-pap | ssl.truststore.certificates = null
kafka | [2024-02-29 23:14:16,099] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
grafana | logger=migrator t=2024-02-29T23:14:04.217048131Z level=info msg="Executing migration" id="Add created time to annotation table"
policy-db-migrator |
kafka | [2024-02-29 23:14:16,099] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-pap | ssl.truststore.location = null
grafana | logger=migrator t=2024-02-29T23:14:04.222990331Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.94329ms
policy-db-migrator | --------------
policy-pap | ssl.truststore.password = null
grafana | logger=migrator t=2024-02-29T23:14:04.227169107Z level=info msg="Executing migration" id="Add updated time to annotation table"
kafka | [2024-02-29 23:14:16,100] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-02-29T23:14:04.233851315Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=6.683148ms
kafka | [2024-02-29 23:14:16,101] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
policy-db-migrator | --------------
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-02-29T23:14:04.238650445Z level=info msg="Executing migration" id="Add index for created in annotation table"
kafka | [2024-02-29 23:14:16,102] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.8:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
policy-db-migrator |
policy-pap |
grafana | logger=migrator t=2024-02-29T23:14:04.239281131Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=630.786µs
kafka | [2024-02-29 23:14:16,106] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
policy-db-migrator |
policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
grafana | logger=migrator t=2024-02-29T23:14:04.246937467Z level=info msg="Executing migration" id="Add index for updated in annotation table"
kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-02-29T23:14:04.248343129Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.405662ms
kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
policy-db-migrator | --------------
policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248483687
grafana | logger=migrator t=2024-02-29T23:14:04.253379772Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
grafana | logger=migrator t=2024-02-29T23:14:04.253779615Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=396.863µs
kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
policy-db-migrator | --------------
policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|ServiceManager|main] Policy PAP starting topics
grafana | logger=migrator t=2024-02-29T23:14:04.260661524Z level=info msg="Executing migration" id="Add epoch_end column"
kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:04.267135889Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.473515ms
kafka | [2024-02-29 23:14:16,112] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=90a6ce0d-d2c8-411b-a6c6-dec263368d9a, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:04.271740799Z level=info msg="Executing migration" id="Add index for epoch_end"
kafka | [2024-02-29 23:14:16,112] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ee5900cb-eee5-431a-a953-12f2e7174bf4, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
grafana | logger=migrator t=2024-02-29T23:14:04.272369924Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=628.125µs
kafka | [2024-02-29 23:14:16,112] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
policy-pap | [2024-02-29T23:14:43.687+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f77f3cef-1815-4905-9b5a-40d6087ec71b, alive=false, publisher=null]]: starting
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:04.279874738Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
kafka | [2024-02-29 23:14:16,113] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
policy-pap | [2024-02-29T23:14:43.706+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
grafana | logger=migrator t=2024-02-29T23:14:04.280138301Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=264.543µs
kafka | [2024-02-29 23:14:16,113] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
policy-pap | acks = -1
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:04.336883836Z level=info msg="Executing migration" id="Move region to single row"
kafka | [2024-02-29 23:14:16,114] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:04.337924725Z level=info msg="Migration successfully executed" id="Move region to single row" duration=1.040959ms
kafka | [2024-02-29 23:14:16,115] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
policy-pap | batch.size = 16384
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:04.344398871Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
kafka | [2024-02-29 23:14:16,116] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
policy-pap | bootstrap.servers = [kafka:9092]
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
grafana | logger=migrator t=2024-02-29T23:14:04.346169546Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.771185ms
kafka | [2024-02-29 23:14:16,128] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
policy-pap | buffer.memory = 33554432
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:04.351342Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
kafka | [2024-02-29 23:14:16,128] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser)
policy-pap | client.dns.lookup = use_all_dns_ips
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
grafana | logger=migrator t=2024-02-29T23:14:04.352257518Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=906.288µs
kafka | [2024-02-29 23:14:16,129] INFO Kafka startTimeMs: 1709248456123 (org.apache.kafka.common.utils.AppInfoParser)
policy-pap | client.id = producer-1
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:16,147] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
policy-pap | compression.type = none
grafana | logger=migrator t=2024-02-29T23:14:04.356308102Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
kafka | [2024-02-29 23:14:16,179] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-02-29T23:14:04.357301191Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on
annotation table" duration=992.489µs policy-pap | connections.max.idle.ms = 540000 kafka | [2024-02-29 23:14:16,215] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) grafana | logger=migrator t=2024-02-29T23:14:04.365864464Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" policy-pap | delivery.timeout.ms = 120000 kafka | [2024-02-29 23:14:16,285] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:04.367295827Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.429963ms policy-pap | enable.idempotence = true kafka | [2024-02-29 23:14:16,354] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) grafana | logger=migrator t=2024-02-29T23:14:04.372843254Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" policy-pap | interceptor.classes = [] kafka | [2024-02-29 23:14:16,358] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) grafana | logger=migrator t=2024-02-29T23:14:04.374292266Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.448172ms policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-02-29 23:14:21,181] INFO [Controller id=1] Processing automatic preferred replica leader election 
(kafka.controller.KafkaController) grafana | logger=migrator t=2024-02-29T23:14:04.380474619Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" policy-pap | linger.ms = 0 kafka | [2024-02-29 23:14:21,181] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) grafana | logger=migrator t=2024-02-29T23:14:04.381988832Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.514423ms policy-pap | max.block.ms = 60000 kafka | [2024-02-29 23:14:44,341] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) grafana | logger=migrator t=2024-02-29T23:14:04.389539137Z level=info msg="Executing migration" id="Increase tags column to length 4096" policy-pap | max.in.flight.requests.per.connection = 5 grafana | logger=migrator t=2024-02-29T23:14:04.389677928Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=139.551µs kafka | [2024-02-29 23:14:44,342] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 
32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) policy-pap | max.request.size = 1048576 grafana | logger=migrator t=2024-02-29T23:14:04.395507558Z level=info msg="Executing migration" id="create test_data table" kafka | [2024-02-29 23:14:44,354] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-02-29T23:14:04.396777409Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.270651ms kafka | [2024-02-29 23:14:44,363] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) policy-pap | metadata.max.idle.ms = 300000 grafana | logger=migrator t=2024-02-29T23:14:04.40152366Z level=info msg="Executing migration" id="create dashboard_version table v1" kafka | [2024-02-29 23:14:44,388] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(j4DaYO3UQ1iVwjuKp7Abhw),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(Fk26_aqxRF-nlCfGN2xAXQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-02-29T23:14:04.40275063Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.227381ms kafka | [2024-02-29 23:14:44,390] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-02-29T23:14:04.412405513Z level=info msg="Executing migration" 
id="add index dashboard_version.dashboard_id" policy-db-migrator | kafka | [2024-02-29 23:14:44,406] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-02-29T23:14:04.413819215Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.412072ms policy-db-migrator | kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-02-29T23:14:04.421006066Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-02-29T23:14:04.422883992Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.876166ms policy-db-migrator | -------------- kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | partitioner.availability.timeout.ms = 0 grafana | logger=migrator t=2024-02-29T23:14:04.426474203Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON 
toscanodetype(requirementsName, requirementsVersion) kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | partitioner.class = null grafana | logger=migrator t=2024-02-29T23:14:04.426944677Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=469.794µs policy-db-migrator | -------------- kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | partitioner.ignore.keys = false grafana | logger=migrator t=2024-02-29T23:14:04.431327155Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" policy-db-migrator | kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | receive.buffer.bytes = 32768 grafana | logger=migrator t=2024-02-29T23:14:04.431721478Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=393.593µs policy-db-migrator | kafka | [2024-02-29 23:14:44,407] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-02-29T23:14:04.440962467Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql kafka | [2024-02-29 23:14:44,409] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to 
NewPartition with assigned replicas 1 (state.change.logger) policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-02-29T23:14:04.441082378Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=120.171µs policy-db-migrator | -------------- kafka | [2024-02-29 23:14:44,409] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | request.timeout.ms = 30000 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) grafana | logger=migrator t=2024-02-29T23:14:04.44711937Z level=info msg="Executing migration" id="create team table" kafka | [2024-02-29 23:14:44,409] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | retries = 2147483647 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:04.44823858Z level=info msg="Migration successfully executed" id="create team table" duration=1.116889ms kafka | [2024-02-29 23:14:44,409] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | retry.backoff.ms = 100 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:04.455510211Z level=info msg="Executing migration" id="add index team.org_id" kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:04.457105605Z level=info msg="Migration successfully executed" 
id="add index team.org_id" duration=1.585194ms kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.jaas.config = null policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql grafana | logger=migrator t=2024-02-29T23:14:04.466841939Z level=info msg="Executing migration" id="add unique index team_org_id_name" kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:04.468291471Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.446622ms kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) grafana | logger=migrator t=2024-02-29T23:14:04.474330193Z level=info msg="Executing migration" id="Add column uid in team" kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.kerberos.service.name = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:04.479009033Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.67894ms kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:04.483797814Z level=info msg="Executing migration" id="Update uid column values in team" kafka | [2024-02-29 23:14:44,410] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:04.483977215Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=179.181µs kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql grafana | logger=migrator t=2024-02-29T23:14:04.489991817Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.login.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:04.490917895Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=925.548µs kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName 
ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) grafana | logger=migrator t=2024-02-29T23:14:04.498841122Z level=info msg="Executing migration" id="create team member table" kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:04.500091153Z level=info msg="Migration successfully executed" id="create team member table" duration=1.250451ms kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:04.506558559Z level=info msg="Executing migration" id="add index team_member.org_id" kafka | [2024-02-29 23:14:44,412] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:04.508279583Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.708774ms kafka | [2024-02-29 23:14:44,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql grafana | logger=migrator t=2024-02-29T23:14:04.512418009Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" kafka | [2024-02-29 23:14:44,413] INFO 
[Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:04.513423787Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.005018ms kafka | [2024-02-29 23:14:44,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) grafana | logger=migrator t=2024-02-29T23:14:04.519330178Z level=info msg="Executing migration" id="add index team_member.team_id" kafka | [2024-02-29 23:14:44,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:04.521095943Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.764955ms kafka | [2024-02-29 23:14:44,413] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:04.526512349Z level=info msg="Executing migration" id="Add column email to team table" kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-pap | 
sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:04.531553302Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.039893ms
kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
grafana | logger=migrator t=2024-02-29T23:14:04.536229632Z level=info msg="Executing migration" id="Add column external to team_member table"
kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:04.540918833Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.688701ms
grafana | logger=migrator t=2024-02-29T23:14:04.545488082Z level=info msg="Executing migration" id="Add column permission to team_member table"
kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-02-29T23:14:04.550372094Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.882872ms
kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-02-29T23:14:04.555408367Z level=info msg="Executing migration" id="create dashboard acl table"
kafka | [2024-02-29 23:14:44,414] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-02-29T23:14:04.556347495Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=939.069µs
kafka | [2024-02-29 23:14:44,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-02-29T23:14:04.561982003Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
kafka | [2024-02-29 23:14:44,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-02-29T23:14:04.563707048Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.734185ms
kafka | [2024-02-29 23:14:44,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-02-29T23:14:04.570559446Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
kafka | [2024-02-29 23:14:44,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-02-29T23:14:04.572319751Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.759425ms
kafka | [2024-02-29 23:14:44,415] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-02-29T23:14:04.579872586Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | security.providers = null
grafana | logger=migrator t=2024-02-29T23:14:04.58144512Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.571803ms
kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-02-29T23:14:04.593266361Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-02-29T23:14:04.595139627Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.872536ms
kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-02-29T23:14:04.601472461Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
policy-pap | ssl.cipher.suites = null
policy-db-migrator | --------------
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.603112695Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.640014ms
policy-db-migrator |
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.608108528Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
policy-db-migrator |
policy-pap | ssl.engine.factory.class = null
kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.609138647Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.029199ms
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-pap | ssl.key.password = null
kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.61538024Z level=info msg="Executing migration" id="add index dashboard_permission"
policy-db-migrator | --------------
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-02-29 23:14:44,416] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.616298378Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=917.698µs
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-02-29 23:14:44,417] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.622293489Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
policy-db-migrator | --------------
policy-pap | ssl.keystore.key = null
kafka | [2024-02-29 23:14:44,423] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.622888684Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=590.075µs
policy-db-migrator |
policy-pap | ssl.keystore.location = null
kafka | [2024-02-29 23:14:44,423] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.630270108Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
policy-db-migrator |
policy-pap | ssl.keystore.password = null
kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.630647421Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=376.243µs
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-pap | ssl.keystore.type = JKS
kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.637445949Z level=info msg="Executing migration" id="create tag table"
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.638522448Z level=info msg="Migration successfully executed" id="create tag table" duration=1.076079ms
policy-db-migrator | --------------
policy-pap | ssl.provider = null
kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.645817131Z level=info msg="Executing migration" id="add index tag.key_value"
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.646738948Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=921.337µs
policy-db-migrator | --------------
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.653525736Z level=info msg="Executing migration" id="create login attempt table"
policy-db-migrator |
policy-pap | ssl.truststore.certificates = null
kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.654599615Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.068829ms
policy-db-migrator |
policy-pap | ssl.truststore.location = null
kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.65982829Z level=info msg="Executing migration" id="add index login_attempt.username"
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-pap | ssl.truststore.password = null
kafka | [2024-02-29 23:14:44,424] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.truststore.type = JKS
kafka | [2024-02-29 23:14:44,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | transaction.timeout.ms = 60000
grafana | logger=migrator t=2024-02-29T23:14:04.661329543Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.500543ms
kafka | [2024-02-29 23:14:44,425] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | transactional.id = null
grafana | logger=migrator t=2024-02-29T23:14:04.670851685Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
kafka | [2024-02-29 23:14:44,427] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
grafana | logger=migrator t=2024-02-29T23:14:04.672207696Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.355751ms
kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap |
grafana | logger=migrator t=2024-02-29T23:14:04.680004013Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-pap | [2024-02-29T23:14:43.718+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
grafana | logger=migrator t=2024-02-29T23:14:04.702350464Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=22.354091ms
kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-29T23:14:43.734+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
grafana | logger=migrator t=2024-02-29T23:14:04.70765462Z level=info msg="Executing migration" id="create login_attempt v2"
kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | [2024-02-29T23:14:43.734+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-02-29T23:14:04.708562828Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=907.978µs
kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-29T23:14:43.734+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248483734
grafana | logger=migrator t=2024-02-29T23:14:04.714032974Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
policy-pap | [2024-02-29T23:14:43.734+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f77f3cef-1815-4905-9b5a-40d6087ec71b, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
grafana | logger=migrator t=2024-02-29T23:14:04.714947842Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=916.378µs
kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-02-29T23:14:43.735+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=44732de5-846e-4467-bb27-5d73124fde9c, alive=false, publisher=null]]: starting
grafana | logger=migrator t=2024-02-29T23:14:04.721896082Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
kafka | [2024-02-29 23:14:44,428] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-02-29T23:14:43.735+00:00|INFO|ProducerConfig|main] ProducerConfig values:
grafana | logger=migrator t=2024-02-29T23:14:04.722565797Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=669.445µs
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-pap | acks = -1
kafka | [2024-02-29 23:14:44,431] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.728611509Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
policy-db-migrator | --------------
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-02-29 23:14:44,431] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.72983679Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.225781ms
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | batch.size = 16384
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.797857552Z level=info msg="Executing migration" id="create user auth table"
policy-db-migrator | --------------
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.799120973Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.263911ms
policy-db-migrator |
policy-pap | buffer.memory = 33554432
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.805073234Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
policy-db-migrator |
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.806576637Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.505523ms
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-pap | client.id = producer-2
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.814157461Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
policy-db-migrator | --------------
policy-pap | compression.type = none
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.814269072Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=101.571µs
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.821137371Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
policy-db-migrator | --------------
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.828169551Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.03266ms
policy-db-migrator |
policy-pap | enable.idempotence = true
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.834625206Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
policy-db-migrator |
policy-pap | interceptor.classes = []
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.839997422Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.371896ms
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.845776142Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
policy-db-migrator | --------------
policy-pap | linger.ms = 0
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.851066947Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.290275ms
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | max.block.ms = 60000
kafka | [2024-02-29 23:14:44,432] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.855781917Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
policy-db-migrator | --------------
policy-pap | max.in.flight.requests.per.connection = 5
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.861066613Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.284096ms
policy-db-migrator |
policy-pap | max.request.size = 1048576
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.864916726Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
policy-db-migrator |
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.866091726Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.18143ms
policy-pap | metadata.max.idle.ms = 300000
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
grafana | logger=migrator t=2024-02-29T23:14:04.870437673Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
policy-pap | metric.reporters = []
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:04.875468206Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.030273ms
policy-pap | metrics.num.samples = 2
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-02-29T23:14:04.882539617Z level=info msg="Executing migration" id="create server_lock table"
policy-pap | metrics.recording.level = INFO
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:04.883473535Z level=info msg="Migration successfully executed" id="create server_lock table" duration=932.808µs
policy-pap | metrics.sample.window.ms = 30000
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:04.887739151Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
policy-pap | partitioner.adaptive.partitioning.enable = true
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:04.889319815Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.572284ms
policy-pap | partitioner.availability.timeout.ms = 0
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
grafana | logger=migrator t=2024-02-29T23:14:04.894111336Z level=info msg="Executing migration" id="create user auth token table"
policy-pap | partitioner.class = null
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:04.895139445Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.029099ms
policy-pap | partitioner.ignore.keys = false
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:04.901010955Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
policy-pap | receive.buffer.bytes = 32768
kafka | [2024-02-29 23:14:44,433] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:04.902049844Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.038859ms
policy-pap | reconnect.backoff.max.ms = 1000
kafka | [2024-02-29 23:14:44,434] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
grafana | logger=migrator t=2024-02-29T23:14:04.910211134Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
policy-pap | reconnect.backoff.ms = 50
kafka | [2024-02-29 23:14:44,434] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.911218402Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.007088ms
policy-pap | request.timeout.ms = 30000
kafka | [2024-02-29 23:14:44,666] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:04.917003672Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
policy-pap | retries = 2147483647
kafka | [2024-02-29 23:14:44,666] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.918751046Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.746804ms
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | retry.backoff.ms = 100
kafka | [2024-02-29 23:14:44,666] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.925055141Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
policy-db-migrator | --------------
policy-pap | sasl.client.callback.handler.class = null
kafka | [2024-02-29 23:14:44,666] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.932447094Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=7.392413ms
policy-db-migrator |
policy-pap | sasl.jaas.config = null
kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.937540727Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
policy-db-migrator |
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.938642657Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.10121ms
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.943857771Z level=info msg="Executing migration" id="create cache_data table"
policy-db-migrator | --------------
policy-pap | sasl.kerberos.service.name = null
kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.944702319Z level=info msg="Migration successfully executed" id="create cache_data table" duration=847.578µs
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.951035233Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
policy-db-migrator | --------------
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.952619687Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.563023ms
policy-db-migrator |
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-02-29 23:14:44,667] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:04.959139032Z level=info msg="Executing migration" id="create short_url table v1"
policy-db-migrator |
policy-pap | sasl.login.class = null
kafka | [2024-02-29
23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql grafana | logger=migrator t=2024-02-29T23:14:04.960446213Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.302041ms policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-02-29 23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:04.966522806Z level=info msg="Executing migration" id="add index short_url.org_id-uid" policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-02-29 23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-02-29T23:14:04.967997128Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.473792ms policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-02-29 23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:04.982614773Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | kafka | [2024-02-29 23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:04.982759544Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=145.011µs policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | kafka | [2024-02-29 23:14:44,668] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:04.989367011Z level=info msg="Executing migration" id="delete alert_definition table" policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:04.989548493Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=191.602µs policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:04.99505483Z level=info msg="Executing migration" id="recreate alert_definition table" policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:04.996653833Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.598453ms policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | -------------- kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.002454593Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.004061087Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.606054ms policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.01245381Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | > upgrade 0100-pdp.sql kafka | [2024-02-29 23:14:44,669] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.013566501Z level=info msg="Migration successfully executed" id="add index in 
alert_definition on org_id and uid columns" duration=1.112571ms policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | -------------- kafka | [2024-02-29 23:14:44,670] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.01751219Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY kafka | [2024-02-29 23:14:44,670] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.017672301Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=160.881µs policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- kafka | [2024-02-29 23:14:44,670] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.021525819Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" policy-pap | 
sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | kafka | [2024-02-29 23:14:44,670] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.023206926Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.679987ms policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | kafka | [2024-02-29 23:14:44,670] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.033000673Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | > upgrade 0110-idx_tsidx1.sql kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.034076564Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.074311ms policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | -------------- kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition 
with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.041517917Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" policy-pap | security.protocol = PLAINTEXT policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.043538507Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=2.01121ms policy-pap | security.providers = null policy-db-migrator | -------------- kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.048651078Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" policy-pap | send.buffer.bytes = 131072 policy-db-migrator | kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator 
t=2024-02-29T23:14:05.050006121Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.354543ms policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.056574137Z level=info msg="Executing migration" id="Add column paused in alert_definition" policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.066105552Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.532474ms policy-pap | ssl.cipher.suites = null policy-db-migrator | -------------- kafka | [2024-02-29 23:14:44,671] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.071318983Z level=info msg="Executing migration" id="drop alert_definition table" policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY kafka | [2024-02-29 
23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.072712147Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.392484ms policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | -------------- kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.078771648Z level=info msg="Executing migration" id="delete alert_definition_version table" policy-pap | ssl.engine.factory.class = null policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.078888729Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=117.641µs kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.key.password = null policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.082508065Z level=info msg="Executing migration" id="recreate alert_definition_version table" kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | > upgrade 0130-pdpstatistics.sql grafana | logger=migrator t=2024-02-29T23:14:05.084121641Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.612206ms kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.09098817Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.keystore.key = null policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL grafana | logger=migrator t=2024-02-29T23:14:05.092653456Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.670466ms kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition 
to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.keystore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.100817058Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.keystore.password = null policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.101873058Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.0619ms kafka | [2024-02-29 23:14:44,672] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.keystore.type = JKS policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.109966939Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" kafka | [2024-02-29 23:14:44,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.protocol = 
TLSv1.3 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql grafana | logger=migrator t=2024-02-29T23:14:05.1100704Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=103.531µs kafka | [2024-02-29 23:14:44,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.provider = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.115459484Z level=info msg="Executing migration" id="drop alert_definition_version table" kafka | [2024-02-29 23:14:44,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.secure.random.implementation = null policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num grafana | logger=migrator t=2024-02-29T23:14:05.116965729Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.506555ms kafka | [2024-02-29 23:14:44,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | 
ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.123262362Z level=info msg="Executing migration" id="create alert_instance table" kafka | [2024-02-29 23:14:44,673] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | ssl.truststore.certificates = null policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.124629935Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.367414ms kafka | [2024-02-29 23:14:44,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) policy-pap | ssl.truststore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.13007223Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" kafka | [2024-02-29 23:14:44,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) policy-pap | ssl.truststore.password = null policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, 
version) grafana | logger=migrator t=2024-02-29T23:14:05.131600715Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.525755ms kafka | [2024-02-29 23:14:44,676] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) policy-pap | ssl.truststore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.137548384Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) policy-pap | transaction.timeout.ms = 60000 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.138646675Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.099551ms kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 
policy-pap | transactional.id = null
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.148985038Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
grafana | logger=migrator t=2024-02-29T23:14:05.157965938Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=8.97869ms
kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
policy-pap |
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:05.163730195Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
policy-pap | [2024-02-29T23:14:43.736+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:05.164776786Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.046261ms
policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:05.172947307Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.173990288Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.042811ms
kafka | [2024-02-29 23:14:44,677] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709248483738
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.180109959Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=44732de5-846e-4467-bb27-5d73124fde9c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-02-29T23:14:05.218989557Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=38.879988ms
kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.223182819Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.738+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
grafana | logger=migrator t=2024-02-29T23:14:05.258774304Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=35.586504ms
kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.740+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.263259698Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.742+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.264841224Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.580926ms
kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.744+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.270715313Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.744+00:00|INFO|TimerManager|Thread-9] timer manager update started
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-02-29T23:14:05.272647452Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.930989ms
kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.746+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.280001135Z level=info msg="Executing migration" id="add current_reason column related to current_state"
kafka | [2024-02-29 23:14:44,678] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.746+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
grafana | logger=migrator t=2024-02-29T23:14:05.288886964Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.892439ms
kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.747+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-db-migrator | JOIN pdpstatistics b
grafana | logger=migrator t=2024-02-29T23:14:05.295431129Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.747+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
grafana | logger=migrator t=2024-02-29T23:14:05.304726352Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=9.287473ms
kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.748+00:00|INFO|ServiceManager|main] Policy PAP started
policy-db-migrator | SET a.id = b.id
grafana | logger=migrator t=2024-02-29T23:14:05.311374628Z level=info msg="Executing migration" id="create alert_rule table"
kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
policy-pap | [2024-02-29T23:14:43.749+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.778 seconds (process running for 12.492)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.313093175Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.722897ms
kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.289+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: FqFLOU6jRgiQltXq-uD-BA
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.324417499Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.291+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: FqFLOU6jRgiQltXq-uD-BA
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.325644131Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.225592ms
kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.294+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-02-29T23:14:05.332065895Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
kafka | [2024-02-29 23:14:44,679] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.294+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Cluster ID: FqFLOU6jRgiQltXq-uD-BA
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.333515269Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.449884ms
kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.397+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
grafana | logger=migrator t=2024-02-29T23:14:05.339124875Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.398+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: FqFLOU6jRgiQltXq-uD-BA
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.340287397Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.157752ms
kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.418+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.352005544Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.438+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.352178875Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=177.891µs
kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.452+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
grafana | logger=migrator t=2024-02-29T23:14:05.355948273Z level=info msg="Executing migration" id="add column for to alert_rule"
kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.520+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.362298987Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.350394ms
kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.531+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
grafana | logger=migrator t=2024-02-29T23:14:05.368432978Z level=info msg="Executing migration" id="add column annotations to alert_rule"
kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.636+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.374806461Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.373093ms
kafka | [2024-02-29 23:14:44,680] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.647+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.378482618Z level=info msg="Executing migration" id="add column labels to alert_rule"
kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.751+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.38474521Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.262192ms
kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.753+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
grafana | logger=migrator t=2024-02-29T23:14:05.389539958Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.857+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.390891892Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.351584ms
kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.860+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
grafana | logger=migrator t=2024-02-29T23:14:05.396201115Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.963+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.397296746Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.095231ms
kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
policy-pap | [2024-02-29T23:14:44.964+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator |
kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
policy-pap | [2024-02-29T23:14:45.068+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.400530218Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
kafka | [2024-02-29 23:14:44,681] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
policy-pap | [2024-02-29T23:14:45.071+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0210-sequence.sql
grafana | logger=migrator t=2024-02-29T23:14:05.406694279Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.163181ms
kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
policy-pap | [2024-02-29T23:14:45.175+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.411232695Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
policy-pap | [2024-02-29T23:14:45.187+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
grafana | logger=migrator t=2024-02-29T23:14:05.417119574Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.886119ms
kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
policy-pap | [2024-02-29T23:14:45.281+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.424681459Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
policy-pap | [2024-02-29T23:14:45.295+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
policy-db-migrator |
policy-pap | [2024-02-29T23:14:45.387+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:05.42575644Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.071531ms
policy-db-migrator | > upgrade 0220-sequence.sql
policy-pap | [2024-02-29T23:14:45.402+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | [2024-02-29 23:14:44,682] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:05.466733877Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
policy-db-migrator | --------------
policy-pap | [2024-02-29T23:14:45.504+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
kafka | [2024-02-29 23:14:44,683] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:05.47501702Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=8.281713ms
policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
kafka | [2024-02-29 23:14:44,686] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:05.478804938Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
policy-db-migrator | --------------
policy-pap | [2024-02-29T23:14:45.515+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] (Re-)joining group
kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:05.484701097Z level=info
msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.890528ms policy-db-migrator | policy-pap | [2024-02-29T23:14:45.516+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.489606336Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" policy-db-migrator | policy-pap | [2024-02-29T23:14:45.526+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.489696166Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=90.36µs policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-pap | [2024-02-29T23:14:45.601+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Request joining group due to: need to re-join with the given member-id: consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.492937419Z level=info msg="Executing migration" id="create 
alert_rule_version table" policy-pap | [2024-02-29T23:14:45.602+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) grafana | logger=migrator t=2024-02-29T23:14:05.493919009Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=981.15µs policy-pap | [2024-02-29T23:14:45.602+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] (Re-)joining group kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.497342233Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" policy-pap | [2024-02-29T23:14:45.603+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.498488174Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.137061ms policy-pap | [2024-02-29T23:14:45.604+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.504004509Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" policy-pap | [2024-02-29T23:14:45.604+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group kafka | [2024-02-29 23:14:44,688] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql grafana | logger=migrator t=2024-02-29T23:14:05.505138751Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.133932ms policy-pap | [2024-02-29T23:14:48.627+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Successfully joined group 
with generation Generation{generationId=1, memberId='consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d', protocol='range'} kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.511693556Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" policy-pap | [2024-02-29T23:14:48.629+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da', protocol='range'} kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) grafana | logger=migrator t=2024-02-29T23:14:05.511798757Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=106.431µs policy-pap | [2024-02-29T23:14:48.637+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da=Assignment(partitions=[policy-pdp-pap-0])} kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica 
(state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.515389513Z level=info msg="Executing migration" id="add column for to alert_rule_version" policy-pap | [2024-02-29T23:14:48.637+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Finished assignment for group at generation 1: {consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d=Assignment(partitions=[policy-pdp-pap-0])} kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.522212701Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.831418ms kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-29T23:14:48.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da', protocol='range'} policy-db-migrator | kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-29T23:14:48.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Successfully synced group in generation Generation{generationId=1, 
memberId='consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d', protocol='range'} policy-db-migrator | > upgrade 0120-toscatrigger.sql grafana | logger=migrator t=2024-02-29T23:14:05.527356152Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-29T23:14:48.684+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.541678935Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=14.311053ms kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-29T23:14:48.685+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) policy-db-migrator | DROP TABLE IF EXISTS toscatrigger grafana | logger=migrator t=2024-02-29T23:14:05.545725516Z level=info msg="Executing migration" id="add column labels to alert_rule_version" kafka | [2024-02-29 23:14:44,689] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-29T23:14:48.690+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: 
policy-pdp-pap-0 policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.552574674Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.858389ms kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-29T23:14:48.690+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Adding newly assigned partitions: policy-pdp-pap-0 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.55719351Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-29T23:14:48.711+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Found no committed offset for partition policy-pdp-pap-0 policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.565018248Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.826338ms kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-29T23:14:48.717+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql grafana | logger=migrator 
t=2024-02-29T23:14:05.568451752Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-29T23:14:48.736+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.574575883Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.123681ms kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-02-29T23:14:48.736+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3, groupId=ee5900cb-eee5-431a-a953-12f2e7174bf4] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
grafana | logger=migrator t=2024-02-29T23:14:05.580061388Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:14:51.186+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
grafana | logger=migrator t=2024-02-29T23:14:05.5802232Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=127.001µs
kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:14:51.186+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.583764805Z level=info msg="Executing migration" id=create_alert_configuration_table
kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:14:51.188+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.584617764Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=853.079µs
kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.365+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.588094248Z level=info msg="Executing migration" id="Add column default in alert_configuration"
kafka | [2024-02-29 23:14:44,690] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | []
policy-db-migrator | > upgrade 0140-toscaparameter.sql
grafana | logger=migrator t=2024-02-29T23:14:05.594344081Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.249243ms
kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.366+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.598190599Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35bb4aa0-ff48-497e-84df-a13cf4a1f6c0","timestampMs":1709248505318,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"}
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
grafana | logger=migrator t=2024-02-29T23:14:05.598356081Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=164.292µs
kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.366+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.634600322Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"35bb4aa0-ff48-497e-84df-a13cf4a1f6c0","timestampMs":1709248505318,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"}
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.6454288Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=10.829488ms
kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.377+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.648981286Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.472+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting
policy-db-migrator | > upgrade 0150-toscaproperty.sql
grafana | logger=migrator t=2024-02-29T23:14:05.649824394Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=842.818µs
kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.472+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting listener
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.653059656Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.472+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting timer
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
grafana | logger=migrator t=2024-02-29T23:14:05.65942964Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.361514ms
kafka | [2024-02-29 23:14:44,691] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.473+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=161524c5-f252-4fdd-a0eb-d79ad94ffa8f, expireMs=1709248535473]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.665355879Z level=info msg="Executing migration" id=create_ngalert_configuration_table
kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.476+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting enqueue
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.666275288Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=919.159µs
kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.476+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate started
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.670244638Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.476+00:00|INFO|TimerManager|Thread-9] update timer waiting 29997ms Timer [name=161524c5-f252-4fdd-a0eb-d79ad94ffa8f, expireMs=1709248535473]
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
grafana | logger=migrator t=2024-02-29T23:14:05.67146695Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.222232ms
kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.478+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.687891354Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","timestampMs":1709248505452,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.696919224Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=9.02907ms
kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.516+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.71742979Z level=info msg="Executing migration" id="create provenance_type table"
kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","timestampMs":1709248505452,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
grafana | logger=migrator t=2024-02-29T23:14:05.718875793Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.436874ms
kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.516+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.728895503Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-02-29T23:15:05.517+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.730220086Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.317393ms
kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:05.757976373Z level=info msg="Executing migration" id="create alert_image table"
policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","timestampMs":1709248505452,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-29 23:14:44,692] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
grafana | logger=migrator t=2024-02-29T23:14:05.761093434Z level=info msg="Migration successfully executed" id="create alert_image table" duration=3.118611ms
policy-pap | [2024-02-29T23:15:05.518+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
kafka | [2024-02-29 23:14:44,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.778353036Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
policy-pap | [2024-02-29T23:15:05.542+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-02-29 23:14:44,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
grafana | logger=migrator t=2024-02-29T23:14:05.780302556Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.9489ms
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d4391888-607f-4994-aede-b40e11cf69cc","timestampMs":1709248505528,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"}
kafka | [2024-02-29 23:14:44,693] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:05.808305465Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
policy-pap | [2024-02-29T23:15:05.546+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-02-29 23:14:44,693] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request
to brokers HashSet() for 0 partitions (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.808551238Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=245.963µs policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"d4391888-607f-4994-aede-b40e11cf69cc","timestampMs":1709248505528,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup"} kafka | [2024-02-29 23:14:44,701] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-02-29T23:15:05.546+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) grafana | logger=migrator t=2024-02-29T23:14:05.820065423Z level=info msg="Executing migration" id=create_alert_configuration_history_table kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-02-29T23:15:05.555+00:00|INFO|network|KAFKA-source-policy-pdp-pap] 
[IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.822302195Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=2.235642ms kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"63e32a2d-8cb2-4776-a518-f859a710d4f3","timestampMs":1709248505529,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.843463226Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-02-29T23:15:05.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.8518814Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" 
duration=8.196832ms kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-02-29T23:15:05.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping enqueue policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql grafana | logger=migrator t=2024-02-29T23:14:05.862291194Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-02-29T23:15:05.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping timer policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.862840109Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | 
[2024-02-29T23:15:05.578+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=161524c5-f252-4fdd-a0eb-d79ad94ffa8f, expireMs=1709248535473] policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY grafana | logger=migrator t=2024-02-29T23:14:05.871258173Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-02-29T23:15:05.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping listener policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.87187845Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=626.027µs kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-02-29T23:15:05.578+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopped policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.881254593Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-02-29T23:15:05.583+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.882550476Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.300163ms kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"161524c5-f252-4fdd-a0eb-d79ad94ffa8f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"63e32a2d-8cb2-4776-a518-f859a710d4f3","timestampMs":1709248505529,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) grafana | logger=migrator t=2024-02-29T23:14:05.887094391Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-02-29T23:15:05.584+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 161524c5-f252-4fdd-a0eb-d79ad94ffa8f policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.893253563Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.161142ms kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-02-29T23:15:05.587+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate successful policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.896752378Z level=info msg="Executing migration" id="create library_element table v1" policy-pap | [2024-02-29T23:15:05.587+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 start publishing next request kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-02-29T23:14:05.897797928Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.05407ms policy-pap | 
[2024-02-29T23:15:05.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange starting kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql grafana | logger=migrator t=2024-02-29T23:14:05.904224042Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange starting listener kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-02-29T23:14:05.905292173Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.063841ms policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange starting timer kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.908429284Z level=info msg="Executing migration" id="create library_element_connection table v1" policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=6f96306c-4911-4d0f-b1c5-6fbfc3da40bc, expireMs=1709248535588] kafka | [2024-02-29 23:14:44,702] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.909652066Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.221822ms policy-db-migrator | -------------- policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange starting enqueue kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.912930859Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" policy-db-migrator | policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 
PdpStateChange started kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.91402759Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.094301ms policy-db-migrator | policy-pap | [2024-02-29T23:15:05.588+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=6f96306c-4911-4d0f-b1c5-6fbfc3da40bc, expireMs=1709248535588] kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.923943999Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" policy-db-migrator | > upgrade 0100-upgrade.sql policy-pap | [2024-02-29T23:15:05.589+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.925659736Z level=info msg="Migration successfully executed" id="add unique 
index library_element org_id_uid" duration=1.711227ms policy-db-migrator | -------------- policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","timestampMs":1709248505453,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.929652486Z level=info msg="Executing migration" id="increase max description length to 2048" policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-pap | [2024-02-29T23:15:05.602+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.929686186Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=28.79µs policy-db-migrator | -------------- policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","timestampMs":1709248505453,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-02-29 
23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.933418413Z level=info msg="Executing migration" id="alter library_element model to mediumtext" policy-db-migrator | policy-pap | [2024-02-29T23:15:05.602+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.933493124Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=70.991µs policy-db-migrator | msg policy-pap | [2024-02-29T23:15:05.618+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.942679156Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" policy-db-migrator | upgrade to 1100 completed policy-pap | 
{"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","timestampMs":1709248505453,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.943020409Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=336.033µs policy-db-migrator | policy-pap | [2024-02-29T23:15:05.619+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.949035969Z level=info msg="Executing migration" id="create data_keys table" policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"54fea3c5-a806-4c92-8dab-1bfaf7236758","timestampMs":1709248505604,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.950201811Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.171472ms policy-db-migrator | -------------- policy-pap | [2024-02-29T23:15:05.620+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6f96306c-4911-4d0f-b1c5-6fbfc3da40bc kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.95718103Z level=info msg="Executing migration" id="create secrets table" policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME policy-pap | [2024-02-29T23:15:05.620+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE kafka | [2024-02-29 23:14:44,703] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.958380292Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.199462ms policy-db-migrator | -------------- policy-pap | [2024-02-29T23:15:05.623+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:05.965033989Z level=info msg="Executing migration" id="rename data_keys name column to id" policy-db-migrator | policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"6f96306c-4911-4d0f-b1c5-6fbfc3da40bc","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"54fea3c5-a806-4c92-8dab-1bfaf7236758","timestampMs":1709248505604,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:06.010732635Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=45.700126ms
policy-db-migrator |
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange stopping
grafana | logger=migrator t=2024-02-29T23:14:06.016037358Z level=info msg="Executing migration" id="add name column into data_keys"
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange stopping enqueue
grafana | logger=migrator t=2024-02-29T23:14:06.022937424Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.900366ms
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange stopping timer
grafana | logger=migrator t=2024-02-29T23:14:06.062792677Z level=info msg="Executing migration" id="copy data_keys id column values into name"
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=6f96306c-4911-4d0f-b1c5-6fbfc3da40bc, expireMs=1709248535588]
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:06.062986029Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=192.122µs
policy-db-migrator | --------------
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange stopping listener
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:06.067394235Z level=info msg="Executing migration" id="rename data_keys name column to label"
policy-db-migrator |
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange stopped
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:06.114437756Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=47.042831ms
policy-db-migrator | --------------
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpStateChange successful
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:06.118061095Z level=info msg="Executing migration" id="rename data_keys id column back to name"
policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 start publishing next request
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:06.16432068Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=46.261555ms
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting
grafana | logger=migrator t=2024-02-29T23:14:06.169285451Z level=info msg="Executing migration" id="create kv_store table v1"
policy-db-migrator |
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting listener
grafana | logger=migrator t=2024-02-29T23:14:06.170193928Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=912.487µs
policy-db-migrator |
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting timer
grafana | logger=migrator t=2024-02-29T23:14:06.174874886Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
policy-db-migrator | > upgrade 0120-audit_sequence.sql
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=68e6fc14-216e-4b7d-9108-21d5680aedaa, expireMs=1709248535624]
grafana | logger=migrator t=2024-02-29T23:14:06.175642532Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=769.356µs
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.624+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate starting enqueue
grafana | logger=migrator t=2024-02-29T23:14:06.178646287Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.625+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate started
grafana | logger=migrator t=2024-02-29T23:14:06.178809528Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=163.311µs
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.625+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-02-29T23:14:06.181861063Z level=info msg="Executing migration" id="create permission table"
policy-db-migrator |
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68e6fc14-216e-4b7d-9108-21d5680aedaa","timestampMs":1709248505608,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-02-29T23:14:06.182479108Z level=info msg="Migration successfully executed" id="create permission table" duration=619.465µs
policy-db-migrator | --------------
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.639+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
grafana | logger=migrator t=2024-02-29T23:14:06.1913802Z level=info msg="Executing migration" id="add unique index permission.role_id"
kafka | [2024-02-29 23:14:44,704] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68e6fc14-216e-4b7d-9108-21d5680aedaa","timestampMs":1709248505608,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.192155166Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=774.656µs
kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.640+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.194981139Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.641+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.195748385Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=767.916µs
kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
policy-pap | {"source":"pap-1c2c6b70-e014-4d8f-8465-7398751b54bf","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"68e6fc14-216e-4b7d-9108-21d5680aedaa","timestampMs":1709248505608,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
grafana | logger=migrator t=2024-02-29T23:14:06.198981611Z level=info msg="Executing migration" id="create role table"
kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.641+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.199559066Z level=info msg="Migration successfully executed" id="create role table" duration=576.845µs
kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.653+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
grafana | logger=migrator t=2024-02-29T23:14:06.204855679Z level=info msg="Executing migration" id="add column display_name"
kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"68e6fc14-216e-4b7d-9108-21d5680aedaa","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"92eb2a16-633e-4c6d-ac5e-481f7cfc27d7","timestampMs":1709248505642,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.210404184Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.548295ms
kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.213766031Z level=info msg="Executing migration" id="add column group_name"
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"68e6fc14-216e-4b7d-9108-21d5680aedaa","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"92eb2a16-633e-4c6d-ac5e-481f7cfc27d7","timestampMs":1709248505642,"name":"apex-abce66fd-2697-4444-8f18-a77fca000410","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.220852499Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.085398ms
policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 68e6fc14-216e-4b7d-9108-21d5680aedaa
kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
grafana | logger=migrator t=2024-02-29T23:14:06.224021954Z level=info msg="Executing migration" id="add index role.org_id"
policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping
kafka | [2024-02-29 23:14:44,743] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.225004742Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=982.288µs
policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping enqueue
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.230318925Z level=info msg="Executing migration" id="add unique index role_org_id_name"
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.231411864Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.092439ms
policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping timer
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
policy-db-migrator | TRUNCATE TABLE sequence
grafana | logger=migrator t=2024-02-29T23:14:06.234851532Z level=info msg="Executing migration" id="add index role_org_id_uid"
policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=68e6fc14-216e-4b7d-9108-21d5680aedaa, expireMs=1709248535624]
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.235963531Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.111369ms
policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopping listener
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.239671941Z level=info msg="Executing migration" id="create team role table"
policy-pap | [2024-02-29T23:15:05.654+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate stopped
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.24072531Z level=info msg="Migration successfully executed" id="create team role table" duration=1.052959ms
policy-pap | [2024-02-29T23:15:05.660+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 PdpUpdate successful
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
grafana | logger=migrator t=2024-02-29T23:14:06.245961892Z level=info msg="Executing migration" id="add index team_role.org_id"
policy-pap | [2024-02-29T23:15:05.660+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-abce66fd-2697-4444-8f18-a77fca000410 has no more requests
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.247690536Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.711924ms
policy-pap | [2024-02-29T23:15:11.854+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
grafana | logger=migrator t=2024-02-29T23:14:06.25184377Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
policy-pap | [2024-02-29T23:15:11.862+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.253628994Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.776804ms
policy-pap | [2024-02-29T23:15:12.298+00:00|INFO|SessionData|http-nio-6969-exec-5] unknown group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.257043082Z level=info msg="Executing migration" id="add index team_role.team_id"
policy-pap | [2024-02-29T23:15:12.849+00:00|INFO|SessionData|http-nio-6969-exec-5] create cached group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.258146801Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.103069ms
policy-pap | [2024-02-29T23:15:12.849+00:00|INFO|SessionData|http-nio-6969-exec-5] creating DB group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-db-migrator | DROP TABLE pdpstatistics
grafana | logger=migrator t=2024-02-29T23:14:06.263472814Z level=info msg="Executing migration" id="create user role table"
policy-pap | [2024-02-29T23:15:13.414+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.264653234Z level=info msg="Migration successfully executed" id="create user role table" duration=1.18324ms
policy-pap | [2024-02-29T23:15:13.704+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.268179882Z level=info msg="Executing migration" id="add index user_role.org_id"
policy-pap | [2024-02-29T23:15:13.806+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.269788655Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.603473ms
policy-pap | [2024-02-29T23:15:13.807+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-02-29T23:14:06.273220413Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
policy-pap | [2024-02-29T23:15:13.808+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.274339662Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.118689ms
policy-pap | [2024-02-29T23:15:13.822+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-29T23:15:13Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-29T23:15:13Z, user=policyadmin)]
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
grafana | logger=migrator t=2024-02-29T23:14:06.2789394Z level=info msg="Executing migration" id="add index user_role.user_id"
policy-pap | [2024-02-29T23:15:14.527+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.280035868Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.096038ms
policy-pap | [2024-02-29T23:15:14.529+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.284506475Z level=info msg="Executing migration" id="create builtin role table"
policy-pap | [2024-02-29T23:15:14.529+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.285301021Z level=info msg="Migration successfully executed" id="create builtin role table" duration=793.996µs
policy-pap | [2024-02-29T23:15:14.529+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
grafana | logger=migrator t=2024-02-29T23:14:06.289933059Z level=info msg="Executing migration" id="add index builtin_role.role_id"
policy-pap | [2024-02-29T23:15:14.529+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.291552032Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.612433ms
policy-pap | [2024-02-29T23:15:14.543+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-29T23:15:14Z, user=policyadmin)]
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
policy-db-migrator | DROP TABLE statistics_sequence
grafana | logger=migrator t=2024-02-29T23:14:06.296616903Z level=info msg="Executing migration" id="add index builtin_role.name"
policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-02-29T23:14:06.298282906Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.665543ms
policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-02-29T23:14:06.301450852Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
policy-db-migrator | policyadmin: OK: upgrade (1300)
grafana | logger=migrator t=2024-02-29T23:14:06.309097884Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.646642ms
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-db-migrator | name version
grafana | logger=migrator t=2024-02-29T23:14:06.312644913Z level=info msg="Executing migration" id="add index builtin_role.org_id"
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
policy-db-migrator | policyadmin 1300
grafana | logger=migrator t=2024-02-29T23:14:06.313707251Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.061898ms
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-pap | [2024-02-29T23:15:14.945+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
policy-db-migrator | ID script operation from_version to_version tag success atTime
grafana | logger=migrator t=2024-02-29T23:14:06.31849407Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
policy-pap | [2024-02-29T23:15:14.955+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-29T23:15:14Z, user=policyadmin)]
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
grafana | logger=migrator t=2024-02-29T23:14:06.319567739Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.069239ms
policy-pap | [2024-02-29T23:15:35.474+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=161524c5-f252-4fdd-a0eb-d79ad94ffa8f, expireMs=1709248535473]
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
grafana | logger=migrator t=2024-02-29T23:14:06.322795325Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
policy-pap | [2024-02-29T23:15:35.547+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
policy-pap | [2024-02-29T23:15:35.550+00:00|INFO|SessionData|http-nio-6969-exec-10] deleting DB group testGroup
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
policy-pap | [2024-02-29T23:15:35.588+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=6f96306c-4911-4d0f-b1c5-6fbfc3da40bc, expireMs=1709248535588]
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11
kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition
__consumer_offsets-43 (state.change.logger) policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 grafana | logger=migrator t=2024-02-29T23:14:06.323890564Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.097409ms kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 grafana | logger=migrator t=2024-02-29T23:14:06.327368642Z level=info msg="Executing migration" id="add unique index role.uid" kafka | [2024-02-29 23:14:44,744] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 grafana | logger=migrator t=2024-02-29T23:14:06.328724503Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.356461ms kafka | [2024-02-29 23:14:44,745] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, 
__consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 grafana | logger=migrator t=2024-02-29T23:14:06.333113049Z level=info msg="Executing migration" id="create seed assignment table" kafka | [2024-02-29 23:14:44,746] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 grafana | logger=migrator t=2024-02-29T23:14:06.333896005Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=782.236µs kafka | [2024-02-29 23:14:44,794] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 grafana | logger=migrator t=2024-02-29T23:14:06.337997038Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" kafka | [2024-02-29 23:14:44,805] INFO Created log for partition __consumer_offsets-3 in 
/var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 grafana | logger=migrator t=2024-02-29T23:14:06.339138558Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.14064ms kafka | [2024-02-29 23:14:44,807] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 grafana | logger=migrator t=2024-02-29T23:14:06.342956868Z level=info msg="Executing migration" id="add column hidden to role table" kafka | [2024-02-29 23:14:44,808] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 grafana | logger=migrator t=2024-02-29T23:14:06.350706511Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.749453ms kafka | [2024-02-29 23:14:44,810] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 kafka | [2024-02-29 23:14:44,825] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:06.355011266Z level=info msg="Executing migration" id="permission kind migration" policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 kafka | [2024-02-29 23:14:44,826] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:06.362762929Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.746783ms policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 kafka | [2024-02-29 23:14:44,826] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:06.366252807Z level=info msg="Executing migration" id="permission attribute migration" policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 kafka | [2024-02-29 23:14:44,826] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:06.374613755Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.357328ms policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 
23:14:11 kafka | [2024-02-29 23:14:44,827] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:06.378627718Z level=info msg="Executing migration" id="permission identifier migration" policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 kafka | [2024-02-29 23:14:44,834] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:06.386896995Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.268757ms policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 kafka | [2024-02-29 23:14:44,835] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:06.391119359Z level=info msg="Executing migration" id="add permission identifier index" policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 kafka | [2024-02-29 23:14:44,835] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:06.391884885Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=764.966µs policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 
2902242314100800u 1 2024-02-29 23:14:11 kafka | [2024-02-29 23:14:44,835] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:06.395200322Z level=info msg="Executing migration" id="create query_history table v1" policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 kafka | [2024-02-29 23:14:44,835] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:06.395896758Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=695.926µs policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:11 kafka | [2024-02-29 23:14:44,842] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:06.399388226Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 kafka | [2024-02-29 23:14:44,842] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:06.40118333Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" 
duration=1.794084ms policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 kafka | [2024-02-29 23:14:44,842] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:06.406337602Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 kafka | [2024-02-29 23:14:44,842] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:06.406404403Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=73.851µs policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 kafka | [2024-02-29 23:14:44,842] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:06.410323355Z level=info msg="Executing migration" id="rbac disabled migrator" policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.410381395Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=56.68µs kafka | [2024-02-29 23:14:44,850] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.415414836Z level=info msg="Executing migration" id="teams permissions migration" kafka | [2024-02-29 23:14:44,851] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.416228972Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=814.466µs kafka | [2024-02-29 23:14:44,851] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.420726029Z level=info msg="Executing migration" id="dashboard permissions" kafka | [2024-02-29 23:14:44,851] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 
(kafka.cluster.Partition) policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.421652996Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=927.927µs kafka | [2024-02-29 23:14:44,852] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.426235124Z level=info msg="Executing migration" id="dashboard permissions uid scopes" kafka | [2024-02-29 23:14:44,865] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:06.426854239Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=619.175µs kafka | [2024-02-29 23:14:44,866] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.430641649Z level=info msg="Executing migration" id="drop managed folder create actions" kafka | [2024-02-29 23:14:44,866] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 
2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.430839391Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=197.882µs kafka | [2024-02-29 23:14:44,866] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.485142311Z level=info msg="Executing migration" id="alerting notification permissions" kafka | [2024-02-29 23:14:44,866] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.485857227Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=715.676µs kafka | [2024-02-29 23:14:44,875] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.492839543Z level=info msg="Executing migration" id="create query_history_star table v1" kafka | [2024-02-29 23:14:44,875] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2902242314100800u 1 
2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.493797661Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=960.268µs kafka | [2024-02-29 23:14:44,875] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.497916865Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" kafka | [2024-02-29 23:14:44,875] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.499693529Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.776174ms kafka | [2024-02-29 23:14:44,875] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.505097403Z level=info msg="Executing migration" id="add column org_id in query_history_star" kafka | [2024-02-29 23:14:44,886] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.513795243Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.68053ms kafka | [2024-02-29 23:14:44,887] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.518904075Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" kafka | [2024-02-29 23:14:44,887] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.518979705Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=75.39µs kafka | [2024-02-29 23:14:44,887] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.5245442Z level=info msg="Executing migration" id="create correlation table v1" kafka | [2024-02-29 23:14:44,887] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:12 grafana | logger=migrator t=2024-02-29T23:14:06.526076233Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.537993ms kafka | [2024-02-29 23:14:44,898] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 grafana | logger=migrator t=2024-02-29T23:14:06.531335846Z level=info msg="Executing migration" id="add index correlations.uid" kafka | [2024-02-29 23:14:44,899] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 grafana | logger=migrator t=2024-02-29T23:14:06.533564754Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=2.228478ms kafka | [2024-02-29 23:14:44,899] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 grafana | logger=migrator t=2024-02-29T23:14:06.538959938Z level=info msg="Executing migration" id="add index correlations.source_uid" kafka | [2024-02-29 23:14:44,899] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 grafana | logger=migrator t=2024-02-29T23:14:06.540140938Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.17795ms kafka | [2024-02-29 23:14:44,899] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 grafana | logger=migrator t=2024-02-29T23:14:06.545622213Z level=info msg="Executing migration" id="add correlation config column" kafka | [2024-02-29 23:14:44,907] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 grafana | logger=migrator t=2024-02-29T23:14:06.553883401Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.260718ms kafka | [2024-02-29 23:14:44,908] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 grafana | logger=migrator t=2024-02-29T23:14:06.558241617Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" kafka | [2024-02-29 23:14:44,908] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 grafana | logger=migrator t=2024-02-29T23:14:06.559343346Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.101689ms kafka | [2024-02-29 23:14:44,908] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13 grafana | 
logger=migrator t=2024-02-29T23:14:06.563903323Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
kafka | [2024-02-29 23:14:44,908] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.565064442Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.161229ms
kafka | [2024-02-29 23:14:44,914] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.571308873Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
kafka | [2024-02-29 23:14:44,914] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.603667995Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=32.354562ms
kafka | [2024-02-29 23:14:44,914] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.607439496Z level=info msg="Executing migration" id="create correlation v2"
kafka | [2024-02-29 23:14:44,914] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.608392064Z level=info msg="Migration successfully executed" id="create correlation v2" duration=952.337µs
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.613313383Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
kafka | [2024-02-29 23:14:44,914] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.614512843Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.19932ms
kafka | [2024-02-29 23:14:44,920] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.618602376Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
kafka | [2024-02-29 23:14:44,921] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.619472373Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=869.947µs
kafka | [2024-02-29 23:14:44,921] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.624218072Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
kafka | [2024-02-29 23:14:44,921] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.625088959Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=870.087µs
kafka | [2024-02-29 23:14:44,921] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.631536991Z level=info msg="Executing migration" id="copy correlation v1 to v2"
kafka | [2024-02-29 23:14:44,931] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.631933824Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=396.863µs
kafka | [2024-02-29 23:14:44,932] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:13
grafana | logger=migrator t=2024-02-29T23:14:06.637986573Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
kafka | [2024-02-29 23:14:44,932] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.638727739Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=740.406µs
kafka | [2024-02-29 23:14:44,932] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.642053926Z level=info msg="Executing migration" id="add provisioning column"
kafka | [2024-02-29 23:14:44,932] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.648114075Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.055669ms
kafka | [2024-02-29 23:14:44,941] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.65362601Z level=info msg="Executing migration" id="create entity_events table"
kafka | [2024-02-29 23:14:44,941] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.654455157Z level=info msg="Migration successfully executed" id="create entity_events table" duration=828.777µs
kafka | [2024-02-29 23:14:44,942] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.657699413Z level=info msg="Executing migration" id="create dashboard public config v1"
kafka | [2024-02-29 23:14:44,942] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.658642651Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=942.928µs
kafka | [2024-02-29 23:14:44,942] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
kafka | [2024-02-29 23:14:44,952] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-29T23:14:06.666579105Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.667108139Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
kafka | [2024-02-29 23:14:44,953] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-29T23:14:06.670544647Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-02-29 23:14:44,953] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.671006261Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-02-29 23:14:44,953] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.674387908Z level=info msg="Executing migration" id="Drop old dashboard public config table"
kafka | [2024-02-29 23:14:44,953] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.675234615Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=846.427µs
kafka | [2024-02-29 23:14:44,968] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.685022895Z level=info msg="Executing migration" id="recreate dashboard public config v1"
kafka | [2024-02-29 23:14:44,969] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.687092991Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=2.073926ms
kafka | [2024-02-29 23:14:44,969] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.697473586Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
kafka | [2024-02-29 23:14:44,969] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.699362441Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.888535ms
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
grafana | logger=migrator t=2024-02-29T23:14:06.703506245Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-02-29 23:14:44,970] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:14
kafka | [2024-02-29 23:14:44,977] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-29T23:14:06.70539654Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.894436ms
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
kafka | [2024-02-29 23:14:44,978] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-29T23:14:06.709268371Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
kafka | [2024-02-29 23:14:44,978] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.71031549Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.046899ms
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
kafka | [2024-02-29 23:14:44,978] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.714649185Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
kafka | [2024-02-29 23:14:44,978] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:06.716238508Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.588813ms
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
kafka | [2024-02-29 23:14:44,985] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-29T23:14:06.722497628Z level=info msg="Executing migration" id="Drop public config table"
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
kafka | [2024-02-29 23:14:44,986] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-29T23:14:06.724495695Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.998557ms
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
kafka | [2024-02-29 23:14:44,986] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.731692423Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2902242314100800u 1 2024-02-29 23:14:15
kafka | [2024-02-29 23:14:44,988] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.733508928Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.819555ms
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15
grafana | logger=migrator t=2024-02-29T23:14:06.738559719Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
kafka | [2024-02-29 23:14:44,988] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15
grafana | logger=migrator t=2024-02-29T23:14:06.740318113Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.759894ms
kafka | [2024-02-29 23:14:45,071] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15
grafana | logger=migrator t=2024-02-29T23:14:06.743762011Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
kafka | [2024-02-29 23:14:45,072] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15
grafana | logger=migrator t=2024-02-29T23:14:06.744577597Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=815.236µs
kafka | [2024-02-29 23:14:45,072] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15
grafana | logger=migrator t=2024-02-29T23:14:06.75101313Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
kafka | [2024-02-29 23:14:45,073] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15
grafana | logger=migrator t=2024-02-29T23:14:06.751835376Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=822.006µs
kafka | [2024-02-29 23:14:45,073] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15
grafana | logger=migrator t=2024-02-29T23:14:06.756252832Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
kafka | [2024-02-29 23:14:45,082] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15
grafana | logger=migrator t=2024-02-29T23:14:06.786132634Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=29.879212ms
kafka | [2024-02-29 23:14:45,082] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15
grafana | logger=migrator t=2024-02-29T23:14:06.792422245Z level=info msg="Executing migration" id="add annotations_enabled column"
kafka | [2024-02-29 23:14:45,082] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:15
grafana | logger=migrator t=2024-02-29T23:14:06.799553953Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.131598ms
kafka | [2024-02-29 23:14:45,082] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:16
grafana | logger=migrator t=2024-02-29T23:14:06.806046676Z level=info msg="Executing migration" id="add time_selection_enabled column"
kafka | [2024-02-29 23:14:45,082] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:16
grafana | logger=migrator t=2024-02-29T23:14:06.815079139Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=9.035543ms
kafka | [2024-02-29 23:14:45,089] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2902242314100900u 1 2024-02-29 23:14:16
grafana | logger=migrator t=2024-02-29T23:14:06.818787899Z level=info msg="Executing migration" id="delete orphaned public dashboards"
kafka | [2024-02-29 23:14:45,090] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-29T23:14:06.818982791Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=195.242µs
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,090] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.829224434Z level=info msg="Executing migration" id="add share column"
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,090] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.83740259Z level=info msg="Migration successfully executed" id="add share column" duration=8.176106ms
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,090] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,098] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-29T23:14:06.841451153Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,099] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-29T23:14:06.841692575Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=239.252µs
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,099] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.845168203Z level=info msg="Executing migration" id="create file table"
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,099] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.845811838Z level=info msg="Migration successfully executed" id="create file table" duration=643.045µs
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,099] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:06.849177605Z level=info msg="Executing migration" id="file table idx: path natural pk"
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2902242314101000u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,107] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-29T23:14:06.850291114Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.112519ms
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2902242314101100u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,107] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-29T23:14:06.854784511Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2902242314101200u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,107] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.85587486Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.089539ms
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2902242314101200u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,107] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.861446275Z level=info msg="Executing migration" id="create file_meta table"
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2902242314101200u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,107] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:06.862743255Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.3025ms
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2902242314101200u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,115] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-29T23:14:06.86825456Z level=info msg="Executing migration" id="file table idx: path key"
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2902242314101300u 1 2024-02-29 23:14:16
kafka | [2024-02-29 23:14:45,115] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-29T23:14:06.869685972Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.430672ms
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2902242314101300u 1 2024-02-29 23:14:17
kafka | [2024-02-29 23:14:45,116] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.876485627Z level=info msg="Executing migration" id="set path collation in file table"
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2902242314101300u 1 2024-02-29 23:14:17
kafka | [2024-02-29 23:14:45,116] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.876533527Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=48µs
policy-db-migrator | policyadmin: OK @ 1300
kafka | [2024-02-29 23:14:45,116] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:06.92626292Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
kafka | [2024-02-29 23:14:45,125] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-29T23:14:06.926374811Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=121.691µs
kafka | [2024-02-29 23:14:45,126] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-29T23:14:06.931037189Z level=info msg="Executing migration" id="managed permissions migration"
kafka | [2024-02-29 23:14:45,126] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.931988967Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=952.118µs
kafka | [2024-02-29 23:14:45,126] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.940479255Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
grafana | logger=migrator t=2024-02-29T23:14:06.940740058Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=260.963µs
kafka | [2024-02-29 23:14:45,126] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-02-29T23:14:06.953134628Z level=info msg="Executing migration" id="RBAC action name migrator"
kafka | [2024-02-29 23:14:45,137] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-02-29T23:14:06.954357308Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.22259ms
kafka | [2024-02-29 23:14:45,137] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-02-29T23:14:06.959938543Z level=info msg="Executing migration" id="Add UID column to playlist"
kafka | [2024-02-29 23:14:45,137] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.969575611Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.637708ms
kafka | [2024-02-29 23:14:45,137] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-02-29T23:14:06.973224521Z level=info msg="Executing
migration" id="Update uid column values in playlist" kafka | [2024-02-29 23:14:45,137] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:06.973383772Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=158.181µs kafka | [2024-02-29 23:14:45,150] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:06.97678679Z level=info msg="Executing migration" id="Add index for uid in playlist" kafka | [2024-02-29 23:14:45,151] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:06.977793768Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.006558ms kafka | [2024-02-29 23:14:45,151] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:06.983190692Z level=info msg="Executing migration" id="update group index for alert rules" kafka | [2024-02-29 23:14:45,151] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:06.983608735Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=422.073µs kafka | [2024-02-29 23:14:45,151] 
INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:06.987614558Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" kafka | [2024-02-29 23:14:45,157] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:06.987838899Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=221.311µs kafka | [2024-02-29 23:14:45,159] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:06.993998279Z level=info msg="Executing migration" id="admin only folder/dashboard permission" kafka | [2024-02-29 23:14:45,159] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:06.994764375Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=771.216µs kafka | [2024-02-29 23:14:45,159] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:06.998335394Z level=info msg="Executing migration" id="add action column to seed_assignment" kafka | [2024-02-29 23:14:45,159] INFO [Broker id=1] Leader 
__consumer_offsets-22 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:07.005540543Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=7.203759ms kafka | [2024-02-29 23:14:45,167] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:07.009462236Z level=info msg="Executing migration" id="add scope column to seed_assignment" kafka | [2024-02-29 23:14:45,167] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:07.017444353Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.986727ms kafka | [2024-02-29 23:14:45,168] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.021974051Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" kafka | [2024-02-29 23:14:45,169] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.022942099Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=969.438µs kafka | [2024-02-29 23:14:45,169] INFO [Broker id=1] Leader 
__consumer_offsets-29 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:07.026753001Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" kafka | [2024-02-29 23:14:45,182] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:07.134335783Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=107.584562ms kafka | [2024-02-29 23:14:45,183] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:07.138148715Z level=info msg="Executing migration" id="add unique index builtin_role_name back" kafka | [2024-02-29 23:14:45,183] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.139054693Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=906.658µs kafka | [2024-02-29 23:14:45,183] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.143783202Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" kafka | [2024-02-29 23:14:45,183] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id 
Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:07.14463386Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=849.318µs kafka | [2024-02-29 23:14:45,196] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:07.148092069Z level=info msg="Executing migration" id="add primary key to seed_assigment" kafka | [2024-02-29 23:14:45,198] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:07.184491914Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=36.399255ms kafka | [2024-02-29 23:14:45,198] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.189789958Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" kafka | [2024-02-29 23:14:45,198] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.19000521Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=216.592µs kafka | [2024-02-29 23:14:45,198] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id 
Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:07.19353829Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" kafka | [2024-02-29 23:14:45,207] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,207] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:07.193719892Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=181.802µs kafka | [2024-02-29 23:14:45,207] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.197563534Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" kafka | [2024-02-29 23:14:45,208] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.197796426Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=230.752µs grafana | logger=migrator t=2024-02-29T23:14:07.201800359Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2024-02-29T23:14:07.202639236Z level=info msg="Migration successfully executed" id="create folder table" 
duration=838.627µs kafka | [2024-02-29 23:14:45,208] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:07.207916161Z level=info msg="Executing migration" id="Add index for parent_uid" kafka | [2024-02-29 23:14:45,215] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:07.209868387Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.946456ms kafka | [2024-02-29 23:14:45,216] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:07.214075043Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" kafka | [2024-02-29 23:14:45,216] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.215251222Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.1757ms kafka | [2024-02-29 23:14:45,216] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.221476814Z level=info msg="Executing migration" id="Update folder title length" kafka | [2024-02-29 23:14:45,216] INFO [Broker id=1] Leader __consumer_offsets-38 
with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:07.221525595Z level=info msg="Migration successfully executed" id="Update folder title length" duration=49.921µs kafka | [2024-02-29 23:14:45,224] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:07.2257169Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" kafka | [2024-02-29 23:14:45,225] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:07.227609226Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.891476ms kafka | [2024-02-29 23:14:45,225] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.231554149Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" kafka | [2024-02-29 23:14:45,225] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.232623318Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.070889ms kafka | [2024-02-29 23:14:45,226] INFO [Broker id=1] Leader 
__consumer_offsets-8 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:07.237822352Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" kafka | [2024-02-29 23:14:45,234] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:07.240459224Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.627312ms kafka | [2024-02-29 23:14:45,234] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:07.24479022Z level=info msg="Executing migration" id="Sync dashboard and folder table" kafka | [2024-02-29 23:14:45,234] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.245492946Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=711.096µs kafka | [2024-02-29 23:14:45,235] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.249194547Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" kafka | [2024-02-29 23:14:45,235] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(j4DaYO3UQ1iVwjuKp7Abhw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and 
removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:07.249452909Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=258.382µs kafka | [2024-02-29 23:14:45,242] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:07.25312204Z level=info msg="Executing migration" id="create anon_device table" kafka | [2024-02-29 23:14:45,242] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:07.254085478Z level=info msg="Migration successfully executed" id="create anon_device table" duration=961.718µs kafka | [2024-02-29 23:14:45,242] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.259358942Z level=info msg="Executing migration" id="add unique index anon_device.device_id" kafka | [2024-02-29 23:14:45,242] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.261245208Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.893036ms kafka | [2024-02-29 23:14:45,243] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:07.267160828Z level=info msg="Executing migration" id="add index anon_device.updated_at" kafka | [2024-02-29 23:14:45,249] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:07.268542679Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.374671ms kafka | [2024-02-29 23:14:45,249] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:07.285813124Z level=info msg="Executing migration" id="create signing_key table" kafka | [2024-02-29 23:14:45,249] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.287030055Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.224601ms kafka | [2024-02-29 23:14:45,250] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.290584734Z level=info msg="Executing migration" id="add unique index signing_key.key_id" kafka | [2024-02-29 23:14:45,250] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:07.291858025Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.273181ms kafka | [2024-02-29 23:14:45,257] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:07.330284558Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" kafka | [2024-02-29 23:14:45,262] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:07.332181004Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.896816ms kafka | [2024-02-29 23:14:45,262] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.34012563Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" kafka | [2024-02-29 23:14:45,262] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.340512243Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=386.733µs kafka | [2024-02-29 23:14:45,262] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-02-29T23:14:07.344280165Z level=info msg="Executing migration" id="Add folder_uid for dashboard" kafka | [2024-02-29 23:14:45,271] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-02-29T23:14:07.359858716Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=15.579341ms kafka | [2024-02-29 23:14:45,272] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-02-29T23:14:07.363194964Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" kafka | [2024-02-29 23:14:45,272] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.363808059Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=616.255µs kafka | [2024-02-29 23:14:45,273] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-02-29T23:14:07.368092615Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2024-02-29T23:14:07.369030943Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=938.218µs grafana | logger=migrator t=2024-02-29T23:14:07.375102504Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator 
t=2024-02-29T23:14:07.376829128Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.725484ms grafana | logger=migrator t=2024-02-29T23:14:07.382715688Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2024-02-29T23:14:07.383593775Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=878.828µs grafana | logger=migrator t=2024-02-29T23:14:07.387963502Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2024-02-29T23:14:07.388255784Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=292.422µs grafana | logger=migrator t=2024-02-29T23:14:07.394378696Z level=info msg="migrations completed" performed=526 skipped=0 duration=4.767963174s grafana | logger=sqlstore t=2024-02-29T23:14:07.407765778Z level=info msg="Created default admin" user=admin grafana | logger=sqlstore t=2024-02-29T23:14:07.408286182Z level=info msg="Created default organization" grafana | logger=secrets t=2024-02-29T23:14:07.41272415Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 grafana | logger=plugin.store t=2024-02-29T23:14:07.428928345Z level=info msg="Loading plugins..." 
grafana | logger=local.finder t=2024-02-29T23:14:07.467088936Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled grafana | logger=plugin.store t=2024-02-29T23:14:07.467166116Z level=info msg="Plugins loaded" count=55 duration=38.239841ms grafana | logger=query_data t=2024-02-29T23:14:07.469914939Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2024-02-29T23:14:07.474202906Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.migration t=2024-02-29T23:14:07.481160854Z level=info msg=Starting grafana | logger=ngalert.migration orgID=1 t=2024-02-29T23:14:07.482100072Z level=info msg="Migrating alerts for organisation" grafana | logger=ngalert.migration orgID=1 t=2024-02-29T23:14:07.482873438Z level=info msg="Alerts found to migrate" alerts=0 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-29T23:14:07.484809024Z level=info msg="Completed legacy migration" grafana | logger=infra.usagestats.collector t=2024-02-29T23:14:07.515487072Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=provisioning.datasources t=2024-02-29T23:14:07.517522329Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=provisioning.alerting t=2024-02-29T23:14:07.53072671Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2024-02-29T23:14:07.53074685Z level=info msg="finished to provision alerting" grafana | logger=grafanaStorageLogger t=2024-02-29T23:14:07.531205264Z level=info msg="Storage starting" grafana | logger=ngalert.state.manager t=2024-02-29T23:14:07.533208311Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.multiorg.alertmanager t=2024-02-29T23:14:07.534524352Z level=info msg="Starting MultiOrg Alertmanager" grafana | 
logger=http.server t=2024-02-29T23:14:07.545510564Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=grafana-apiserver t=2024-02-29T23:14:07.547684332Z level=info msg="Authentication is disabled" grafana | logger=grafana-apiserver t=2024-02-29T23:14:07.556521176Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=plugins.update.checker t=2024-02-29T23:14:07.646214929Z level=info msg="Update check succeeded" duration=115.008265ms grafana | logger=ngalert.state.manager t=2024-02-29T23:14:07.664322091Z level=info msg="State cache has been initialized" states=0 duration=131.10945ms grafana | logger=ngalert.scheduler t=2024-02-29T23:14:07.664434882Z level=info msg="Starting scheduler" tickInterval=10s grafana | logger=ticker t=2024-02-29T23:14:07.664806355Z level=info msg=starting first_tick=2024-02-29T23:14:10Z grafana | logger=grafana.update.checker t=2024-02-29T23:14:07.740094687Z level=info msg="Update check succeeded" duration=208.605311ms grafana | logger=sqlstore.transactions t=2024-02-29T23:14:07.767387336Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=sqlstore.transactions t=2024-02-29T23:14:07.778271177Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" grafana | logger=infra.usagestats t=2024-02-29T23:15:58.543605948Z level=info msg="Usage stats are ready to report" kafka | [2024-02-29 23:14:45,273] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,285] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,288] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,289] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,289] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,289] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,302] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,303] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,303] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,303] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,304] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,313] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,314] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,314] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,314] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,314] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,324] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,324] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,325] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,325] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,325] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,332] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,333] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,333] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,333] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,333] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,340] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,340] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,340] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,340] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,340] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,349] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,350] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,350] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,350] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,351] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,358] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,358] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,358] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,358] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,359] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,366] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,366] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,366] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,366] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,366] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,374] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,379] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,379] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,380] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,380] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,390] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,391] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,393] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,393] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,393] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,403] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-02-29 23:14:45,404] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-02-29 23:14:45,404] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,404] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-02-29 23:14:45,404] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Fk26_aqxRF-nlCfGN2xAXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | 
[2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-02-29 23:14:45,412] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 
for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-02-29 23:14:45,413] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-02-29 23:14:45,414] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-02-29 23:14:45,414] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-02-29 23:14:45,414] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for 
partition __consumer_offsets-21 (state.change.logger) kafka | [2024-02-29 23:14:45,414] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-02-29 23:14:45,414] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-02-29 23:14:45,415] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-02-29 23:14:45,415] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-02-29 23:14:45,415] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-02-29 23:14:45,422] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,427] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 
in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,429] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,429] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,430] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,430] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,430] INFO [GroupMetadataManager brokerId=1] 
Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,430] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,430] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,431] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,431] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,431] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,431] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
kafka | [2024-02-29 23:14:45,431] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,431] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,432] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-02-29 23:14:45,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 
(kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,433] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,433] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,434] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29
23:14:45,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,435] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29
23:14:45,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-02-29 23:14:45,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,440] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 12 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,448] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,449] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,450] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-02-29 23:14:45,443] INFO [Broker id=1] Finished LeaderAndIsr request in 744ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2024-02-29 23:14:45,468] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Fk26_aqxRF-nlCfGN2xAXQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0),
LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=j4DaYO3UQ1iVwjuKp7Abhw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,475] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,476] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,477] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-02-29 23:14:45,478] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-02-29 23:14:45,593] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,593] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group ee5900cb-eee5-431a-a953-12f2e7174bf4 in Empty state. Created a new member id consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,613] INFO [GroupCoordinator 1]: Preparing to rebalance group ee5900cb-eee5-431a-a953-12f2e7174bf4 in state PreparingRebalance with old generation 0 (__consumer_offsets-17) (reason: Adding new member consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,619] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,752] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f in Empty state. Created a new member id consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:45,756] INFO [GroupCoordinator 1]: Preparing to rebalance group 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f in state PreparingRebalance with old generation 0 (__consumer_offsets-43) (reason: Adding new member consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:48,623] INFO [GroupCoordinator 1]: Stabilized group ee5900cb-eee5-431a-a953-12f2e7174bf4 generation 1 (__consumer_offsets-17) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:48,627] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:48,659] INFO [GroupCoordinator 1]: Assignment received from leader consumer-ee5900cb-eee5-431a-a953-12f2e7174bf4-3-0faf5e32-79bd-4f41-9620-d327446b083d for group ee5900cb-eee5-431a-a953-12f2e7174bf4 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:48,659] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-4c29484e-1660-4493-a89f-f77a0dd5a7da for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:48,758] INFO [GroupCoordinator 1]: Stabilized group 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f generation 1 (__consumer_offsets-43) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-02-29 23:14:48,776] INFO [GroupCoordinator 1]: Assignment received from leader consumer-9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f-2-f388779b-8eb5-451e-807a-78ed4a4d4025 for group 9bd64ecd-3f0e-4f40-b194-b2aaf1302d2f for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) ++ echo 'Tearing down containers...' Tearing down containers... ++ docker-compose down -v --remove-orphans Stopping policy-apex-pdp ... Stopping policy-pap ... Stopping policy-api ... Stopping kafka ... Stopping grafana ... Stopping simulator ... Stopping mariadb ... Stopping compose_zookeeper_1 ... Stopping prometheus ... Stopping grafana ... done Stopping prometheus ... done Stopping policy-apex-pdp ... done Stopping simulator ... done Stopping policy-pap ... done Stopping mariadb ... done Stopping kafka ... done Stopping compose_zookeeper_1 ... done Stopping policy-api ... done Removing policy-apex-pdp ... Removing policy-pap ... Removing policy-api ... Removing kafka ... Removing policy-db-migrator ... Removing grafana ... Removing simulator ... Removing mariadb ... Removing compose_zookeeper_1 ... Removing prometheus ... Removing grafana ... done Removing kafka ... done Removing policy-api ... done Removing prometheus ... done Removing simulator ... done Removing policy-db-migrator ... done Removing policy-apex-pdp ... done Removing policy-pap ... done Removing mariadb ... done Removing compose_zookeeper_1 ... 
done Removing network compose_default ++ cd /w/workspace/policy-pap-master-project-csit-pap + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + [[ -n /tmp/tmp.yQgLqrqYzc ]] + rsync -av /tmp/tmp.yQgLqrqYzc/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap sending incremental file list ./ log.html output.xml report.html testplan.txt sent 919,002 bytes received 95 bytes 1,838,194.00 bytes/sec total size is 918,461 speedup is 1.00 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models + exit 0 $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2079 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! WARNING! Could not find file: **/log.html WARNING! Could not find file: **/report.html -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. 
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13590451755714185785.sh ---> sysstat.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12809220214754524161.sh ---> package-listing.sh ++ facter osfamily ++ tr '[:upper:]' '[:lower:]' + OS_FAMILY=debian + workspace=/w/workspace/policy-pap-master-project-csit-pap + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/policy-pap-master-project-csit-pap ']' + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7366362530703612521.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ohSB from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-ohSB/bin to PATH INFO: Running in OpenStack, capturing instance metadata [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12966616653946425579.sh provisioning config files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config7726622470440274495tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. 
[EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9676243998789785870.sh ---> create-netrc.sh [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17631643642979732378.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ohSB from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-ohSB/bin to PATH [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17945751632905612697.sh ---> sudo-logs.sh Archiving 'sudo' log.. [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins889751829826837534.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ohSB from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-ohSB/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins6959643072135076841.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ohSB from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-ohSB/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1595 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-9933 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001
---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         832       25127           0        6207       30879
Swap:          1023           0        1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:f4:f7:b5 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.235/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85943sec preferred_lft 85943sec
    inet6 fe80::f816:3eff:fef4:f7b5/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:c9:b1:98:ab brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-9933)   02/29/24   _x86_64_   (8 CPU)

23:10:25     LINUX RESTART      (8 CPU)

23:11:01          tps      rtps      wtps   bread/s   bwrtn/s
23:12:01       115.81     36.14     79.67   1687.84  25902.43
23:13:01       127.73     23.15    104.58   2767.54  31635.26
23:14:01       260.04      2.68    257.36    417.40 149330.71
23:15:01       287.37      9.88    277.49    402.53  28659.91
23:16:01        18.71      0.02     18.70      0.13  19628.10
23:17:01        28.27      0.07     28.21     10.53  21226.26
Average:       139.65     11.99    127.66    880.99  46062.53

23:11:01  kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
23:12:01   30101880  31693852   2837340      8.61     70172   1832432   1433108      4.22    880844   1668688    157908
23:13:01   28594204  31664576   4345016     13.19    104804   3229348   1596872      4.70    991024   2967364   1217884
23:14:01   25582476  31472664   7356744     22.33    141536   5874976   4511916     13.28   1205708   5603564       500
23:15:01   23450660  29492500   9488560     28.81    156340   5993400   8906320     26.20   3378168   5506476      1396
23:16:01   23463856  29506528   9475364     28.77    156612   5993692   8867760     26.09   3365796   5504052       236
23:17:01   23716248  29784688   9222972     28.00    156948   6021868   7323488     21.55   3110336   5518468       336
Average:   25818221  30602468   7120999     21.62    131069   4824286   5439911     16.01   2155313   4461435    229710

23:11:01            IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
23:12:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:12:01             ens3     63.06     42.45    872.93      9.36      0.00      0.00      0.00      0.00
23:12:01               lo      1.67      1.67      0.18      0.18      0.00      0.00      0.00      0.00
23:13:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01             ens3    222.18    147.64   6440.86     15.16      0.00      0.00      0.00      0.00
23:13:01               lo      6.93      6.93      0.65      0.65      0.00      0.00      0.00      0.00
23:13:01  br-f795af09d20d      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01      veth8264fa4      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01      veth35e9170      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01      veth2a66cac      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:15:01      veth8264fa4      5.08      6.47      0.81      0.91      0.00      0.00      0.00      0.00
23:15:01      veth35e9170      0.00      0.42      0.00      0.02      0.00      0.00      0.00      0.00
23:15:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:15:01      veth175875d      0.52      0.90      0.06      0.31      0.00      0.00      0.00      0.00
23:16:01      veth8264fa4      0.17      0.35      0.01      0.02      0.00      0.00      0.00      0.00
23:16:01      veth35e9170      0.00      0.03      0.00      0.00      0.00      0.00      0.00      0.00
23:16:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:16:01      veth175875d      0.23      0.15      0.02      0.01      0.00      0.00      0.00      0.00
23:17:01      veth8264fa4      0.17      0.50      0.01      0.04      0.00      0.00      0.00      0.00
23:17:01      veth35e9170      0.00      0.15      0.00      0.01      0.00      0.00      0.00      0.00
23:17:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:17:01             ens3   1611.48    897.32  33899.60    132.11      0.00      0.00      0.00      0.00
Average:      veth8264fa4      0.90      1.22      0.14      0.16      0.00      0.00      0.00      0.00
Average:      veth35e9170      0.00      0.10      0.00      0.01      0.00      0.00      0.00      0.00
Average:          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:             ens3    215.83    114.86   5523.35     13.08      0.00      0.00      0.00      0.00
---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-9933)   02/29/24   _x86_64_   (8 CPU)

23:10:25     LINUX RESTART      (8 CPU)

23:11:01     CPU     %user     %nice   %system   %iowait    %steal     %idle
23:12:01     all     10.16      0.00      0.82      2.31      0.04     86.67
23:12:01       0     11.08      0.00      0.93      0.43      0.02     87.54
23:12:01       1      0.97      0.00      0.50     13.46      0.03     85.03
23:12:01       2      1.08      0.00      0.35      0.03      0.02     98.52
23:12:01       3      2.54      0.00      0.57      1.45      0.02     95.43
23:12:01       4     26.46      0.00      1.49      0.87      0.05     71.14
23:12:01       5     19.75      0.00      1.32      0.65      0.03     78.25
23:12:01       6     16.09      0.00      0.98      0.68      0.05     82.19
23:12:01       7      3.37      0.00      0.40      0.97      0.07     95.19
23:13:01     all     11.30      0.00      2.08      2.56      0.04     84.03
23:13:01       0      5.79      0.00      1.94      0.22      0.03     92.02
23:13:01       1      4.42      0.00      1.56      8.65      0.02     85.36
23:13:01       2     14.64      0.00      1.74      1.21      0.03     82.38
23:13:01       3      4.25      0.00      1.65      3.42      0.03     90.64
23:13:01       4     19.28      0.00      2.77      1.53      0.03     76.40
23:13:01       5     10.46      0.00      2.14      0.42      0.03     86.94
23:13:01       6     24.64      0.00      2.67      0.87      0.05     71.77
23:13:01       7      6.88      0.00      2.16      4.16      0.07     86.73
23:14:01     all     10.70      0.00      5.18      8.25      0.06     75.81
23:14:01       0      9.80      0.00      5.73      0.71      0.07     83.69
23:14:01       1     12.90      0.00      5.70     18.01      0.09     63.30
23:14:01       2     13.39      0.00      4.09      0.27      0.05     82.19
23:14:01       3     10.49      0.00      4.79     11.05      0.07     73.61
23:14:01       4      8.74      0.00      5.69     10.99      0.07     74.52
23:14:01       5     11.84      0.00      5.43     14.02      0.07     68.65
23:14:01       6      9.21      0.00      5.22      3.75      0.05     81.77
23:14:01       7      9.23      0.00      4.77      7.41      0.05     78.54
23:15:01     all     29.86      0.00      4.01      2.24      0.09     63.80
23:15:01       0     26.73      0.00      3.91      1.70      0.08     67.58
23:15:01       1     29.37      0.00      4.13      1.55      0.10     64.85
23:15:01       2     30.43      0.00      4.13      6.48      0.08     58.87
23:15:01       3     29.34      0.00      3.76      0.69      0.08     66.14
23:15:01       4     30.79      0.00      4.09      2.04      0.10     62.98
23:15:01       5     40.86      0.00      4.93      0.59      0.10     53.52
23:15:01       6     29.23      0.00      4.15      1.92      0.07     64.64
23:15:01       7     22.20      0.00      3.05      2.92      0.08     71.74
23:16:01     all      4.67      0.00      0.44      0.95      0.06     93.89
23:16:01       0      3.99      0.00      0.48      0.00      0.03     95.50
23:16:01       1      5.06      0.00      0.40      0.03      0.05     94.45
23:16:01       2      5.05      0.00      0.40      7.44      0.05     87.06
23:16:01       3      3.85      0.00      0.44      0.03      0.05     95.63
23:16:01       4      4.26      0.00      0.45      0.00      0.07     95.23
23:16:01       5      6.06      0.00      0.65      0.00      0.07     93.22
23:16:01       6      5.95      0.00      0.45      0.05      0.07     93.48
23:16:01       7      3.08      0.00      0.22      0.02      0.07     96.62
23:17:01     all      1.31      0.00      0.37      1.21      0.05     97.06
23:17:01       0      1.19      0.00      0.37      0.00      0.05     98.40
23:17:01       1      1.22      0.00      0.40      0.17      0.05     98.16
23:17:01       2      1.97      0.00      0.45      8.98      0.07     88.53
23:17:01       3      0.89      0.00      0.37      0.05      0.05     98.64
23:17:01       4      1.52      0.00      0.30      0.37      0.05     97.76
23:17:01       5      1.00      0.00      0.47      0.02      0.08     98.43
23:17:01       6      0.87      0.00      0.22      0.02      0.02     98.88
23:17:01       7      1.85      0.00      0.38      0.08      0.07     97.61
Average:     all     11.32      0.00      2.14      2.91      0.06     83.58
Average:       0      9.75      0.00      2.22      0.51      0.05     87.48
Average:       1      8.95      0.00      2.10      6.95      0.06     81.95
Average:       2     11.07      0.00      1.85      4.07      0.05     82.95
Average:       3      8.55      0.00      1.92      2.76      0.05     86.72
Average:       4     15.18      0.00      2.46      2.61      0.06     79.69
Average:       5     14.98      0.00      2.48      2.58      0.06     79.90
Average:       6     14.32      0.00      2.27      1.21      0.05     82.15
Average:       7      7.76      0.00      1.82      2.58      0.07     87.77
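The diagnostics above come from a post-build step that runs a fixed list of commands and prints each one's output under a `--->` banner. A minimal sketch of such a collector is below; the function name `collect_diag` and the exact command list are assumptions inferred from the banners in this log, not the actual LF/Jenkins script:

```shell
#!/bin/bash
# Hypothetical sketch of a host-diagnostics step like the one in this log.
# Each command's output is printed under a "---> <command>:" banner.

collect_diag() {
    local cmds=(
        "uname -a"
        "lscpu"
        "nproc"
        "df -h"
        "free -m"
        "ip addr"
        "sar -b -r -n DEV"   # I/O, memory, and network-device activity
        "sar -P ALL"         # per-CPU utilisation
    )
    local c
    for c in "${cmds[@]}"; do
        echo "---> ${c}:"
        # Run via bash -c so multi-word commands are split normally;
        # tolerate failures (e.g. sar is absent without the sysstat package).
        bash -c "${c}" 2>/dev/null || echo "(command unavailable)"
        echo
    done
}

collect_diag
```

Tolerating individual command failures matters in CI: the diagnostics step should never fail the build just because one tool is missing on the agent image.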