Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-13424 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-dHwYGvWDeR5I/agent.2076
SSH_AGENT_PID=2078
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9201560483265832117.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9201560483265832117.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 9e33a52d0cf03c0458911330fb72037d01b07a4a (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9e33a52d0cf03c0458911330fb72037d01b07a4a # timeout=30
Commit message: "Add Prometheus config for http and k8s participants in csit"
 > git rev-list --no-walk 9e33a52d0cf03c0458911330fb72037d01b07a4a # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10389856696790846078.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-48nb
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-48nb/bin to PATH
Generating Requirements File
Python 3.10.6
pip 24.0 from /tmp/venv-48nb/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4 argcomplete==3.2.3 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2 beautifulsoup4==4.12.3 boto3==1.34.64 botocore==1.34.64
bs4==0.0.2 cachetools==5.3.3 certifi==2024.2.2 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2 click==8.1.7
cliff==4.6.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8
dnspython==2.6.1 docker==4.2.2 dogpile.cache==1.3.2 email_validator==2.1.1 filelock==3.13.1 future==1.0.0 gitdb==4.0.11 GitPython==3.1.42
google-auth==2.28.2 httplib2==0.22.0 identify==2.5.35 idna==3.6 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3 jmespath==1.0.1
jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1 jsonschema-specifications==2023.12.1 keystoneauth1==5.6.0 kubernetes==29.0.0 lftools==0.37.10 lxml==5.1.0
MarkupSafe==2.1.5 msgpack==1.0.8 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.2.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0
oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==3.0.0 os-client-config==2.1.0 os-service-types==1.7.0 osc-lib==3.0.1 oslo.config==9.4.0 oslo.context==5.5.0
oslo.i18n==6.3.0 oslo.log==5.5.0 oslo.serialization==5.4.0 oslo.utils==7.1.0 packaging==24.0 pbr==6.0.0 platformdirs==4.2.0 prettytable==3.10.0
pyasn1==0.5.1 pyasn1-modules==0.3.0 pycparser==2.21 pygerrit2==2.0.15 PyGithub==2.2.0 pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0
pyparsing==2.4.7 pyperclip==1.8.2 pyrsistent==0.20.0 python-cinderclient==9.5.0 python-dateutil==2.9.0.post0 python-heatclient==3.5.0 python-jenkins==1.8.2 python-keystoneclient==5.4.0
python-magnumclient==4.4.0 python-novaclient==18.6.0 python-openstackclient==6.5.0 python-swiftclient==4.5.0 PyYAML==6.0.1 referencing==0.33.0 requests==2.31.0 requests-oauthlib==1.4.0
requestsexceptions==1.4.0 rfc3986==2.0.0 rpds-py==0.18.0 rsa==4.9 ruamel.yaml==0.18.6 ruamel.yaml.clib==0.2.8 s3transfer==0.10.1 simplejson==3.19.2
six==1.16.0 smmap==5.0.1 soupsieve==2.5 stevedore==5.2.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.4 tqdm==4.66.2
typing_extensions==4.10.0 tzdata==2024.1 urllib3==1.26.18 virtualenv==20.25.1 wcwidth==0.2.13 websocket-client==1.7.0 wrapt==1.16.0 xdg==6.0.0
xmltodict==0.13.0 yq==3.2.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
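The `lf-activate-venv()` messages above come from the LF global-jjb helper: it creates a throwaway python3 venv, installs `lftools` into it, prepends its `bin/` to `PATH`, and freezes the installed packages into a requirements listing. A minimal sketch of that flow, under the assumption that this is all the helper does here (the venv path below is a stand-in for the job's generated `/tmp/venv-48nb`):

```shell
#!/usr/bin/env bash
# Sketch of the lf-activate-venv flow (assumed behavior, illustrative paths).
set -eu
VENV_DIR="$(mktemp -d)/venv"                 # stand-in for /tmp/venv-48nb
python3 -m venv "$VENV_DIR"                  # "Creating python3 venv at ..."
"$VENV_DIR/bin/python3" -m pip install -qq --upgrade pip
export PATH="$VENV_DIR/bin:$PATH"            # "Adding /tmp/venv-48nb/bin to PATH"
python3 -m pip freeze > "$VENV_DIR/requirements.txt"   # "Generating Requirements File"
```

Because the venv's `bin/` is first on `PATH`, every later `python3 -m pip ...` in the job resolves inside the venv rather than the system interpreter.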
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins17016876437216414179.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "17.0.4" 2022-07-19 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins1463812614679641306.sh + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap + set +u + save_set + RUN_CSIT_SAVE_SET=ehxB + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace + '[' 1 -eq 0 ']' + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts + export ROBOT_VARIABLES= + 
ROBOT_VARIABLES= + export PROJECT=pap + PROJECT=pap + cd /w/workspace/policy-pap-master-project-csit-pap + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ mktemp -d ++ ROBOT_VENV=/tmp/tmp.Yl1wrOOp9K ++ echo ROBOT_VENV=/tmp/tmp.Yl1wrOOp9K +++ python3 --version ++ echo 'Python version is: Python 3.6.9' Python version is: Python 3.6.9 ++ python3 -m venv --clear /tmp/tmp.Yl1wrOOp9K ++ source /tmp/tmp.Yl1wrOOp9K/bin/activate +++ deactivate nondestructive +++ '[' -n '' ']' +++ '[' -n '' ']' +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r +++ '[' -n '' ']' +++ unset VIRTUAL_ENV +++ '[' '!' 
nondestructive = nondestructive ']' +++ VIRTUAL_ENV=/tmp/tmp.Yl1wrOOp9K +++ export VIRTUAL_ENV +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ PATH=/tmp/tmp.Yl1wrOOp9K/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin +++ export PATH +++ '[' -n '' ']' +++ '[' -z '' ']' +++ _OLD_VIRTUAL_PS1= +++ '[' 'x(tmp.Yl1wrOOp9K) ' '!=' x ']' +++ PS1='(tmp.Yl1wrOOp9K) ' +++ export PS1 +++ '[' -n /bin/bash -o -n '' ']' +++ hash -r ++ set -exu ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' ++ echo 'Installing Python Requirements' Installing Python Requirements ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 lxml==5.1.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 
soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 WebTest==3.0.0 zipp==3.6.0 ++ mkdir -p /tmp/tmp.Yl1wrOOp9K/src/onap ++ rm -rf /tmp/tmp.Yl1wrOOp9K/src/onap/testsuite ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre ++ echo 'Installing python confluent-kafka library' Installing python confluent-kafka library ++ python3 -m pip install -qq confluent-kafka ++ echo 'Uninstall docker-py and reinstall docker.' Uninstall docker-py and reinstall docker. ++ python3 -m pip uninstall -y -qq docker ++ python3 -m pip install -U -qq docker ++ python3 -m pip -qq freeze bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2024.2.2 cffi==1.15.1 charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1 deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9 elasticsearch-dsl==7.4.1 enum34==1.1.10 future==1.0.0 idna==3.6 importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.1.0 MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0 ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.2 python-dateutil==2.9.0.post0 PyYAML==6.0.1 regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2 robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0 robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7 websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0 ++ uname ++ grep -q 
Linux ++ sudo apt-get -y -qq install libxml2-utils + load_set + _setopts=ehuxB ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o nounset + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ sed 's/./& /g' ++ echo ehuxB + for i in $(echo "$_setopts" | sed 's/./& /g') + set +e + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +u + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + source_safely /tmp/tmp.Yl1wrOOp9K/bin/activate + '[' -z /tmp/tmp.Yl1wrOOp9K/bin/activate ']' + relax_set + set +e + set +o pipefail + . /tmp/tmp.Yl1wrOOp9K/bin/activate ++ deactivate nondestructive ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ unset _OLD_VIRTUAL_PATH ++ '[' -n '' ']' ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r ++ '[' -n '' ']' ++ unset VIRTUAL_ENV ++ '[' '!' 
nondestructive = nondestructive ']' ++ VIRTUAL_ENV=/tmp/tmp.Yl1wrOOp9K ++ export VIRTUAL_ENV ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ PATH=/tmp/tmp.Yl1wrOOp9K/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ++ export PATH ++ '[' -n '' ']' ++ '[' -z '' ']' ++ _OLD_VIRTUAL_PS1='(tmp.Yl1wrOOp9K) ' ++ '[' 'x(tmp.Yl1wrOOp9K) ' '!=' x ']' ++ PS1='(tmp.Yl1wrOOp9K) (tmp.Yl1wrOOp9K) ' ++ export PS1 ++ '[' -n /bin/bash -o -n '' ']' ++ hash -r + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ sed 's/./& /g' ++ echo hxB + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests + export TEST_OPTIONS= + TEST_OPTIONS= ++ mktemp -d + WORKDIR=/tmp/tmp.Xn1lruRwEW + cd /tmp/tmp.Xn1lruRwEW + docker login -u docker -p docker nexus3.onap.org:10001 WARNING! Using --password via the CLI is insecure. Use --password-stdin. WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. Configure a credential helper to remove this warning. 
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview +++ GERRIT_BRANCH=master +++ echo GERRIT_BRANCH=master GERRIT_BRANCH=master +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 
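The `node-templates.sh` trace above derives `GERRIT_BRANCH=master` by parsing the workspace's `.gitreview` file with awk. A reproduction against a demo file (the field values below are assumptions; only the `defaultbranch` line matters to the awk program):

```shell
# Reproduce the GERRIT_BRANCH extraction traced above, using a temp file
# standing in for the real .gitreview in the workspace root.
demo=$(mktemp)
cat > "$demo" <<'EOF'
[gerrit]
host=gerrit.onap.org
port=29418
project=policy/docker.git
defaultbranch=master
EOF
# Split on '=', print the value whose key is exactly "defaultbranch".
GERRIT_BRANCH=$(awk -F= '$1 == "defaultbranch" { print $2 }' "$demo")
echo "GERRIT_BRANCH=$GERRIT_BRANCH"
```

Matching on `$1 == "defaultbranch"` (rather than a regex) avoids accidentally picking up keys that merely contain the word.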
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose +++ grafana=false +++ gui=false +++ [[ 2 -gt 0 ]] +++ key=apex-pdp +++ case $key in +++ echo apex-pdp apex-pdp +++ component=apex-pdp +++ shift +++ [[ 1 -gt 0 ]] +++ key=--grafana +++ case $key in +++ grafana=true +++ shift +++ [[ 0 -gt 0 ]] +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose +++ echo 'Configuring docker compose...' Configuring docker compose... +++ source export-ports.sh +++ source get-versions.sh +++ '[' -z pap ']' +++ '[' -n apex-pdp ']' +++ '[' apex-pdp == logs ']' +++ '[' true = true ']' +++ echo 'Starting apex-pdp application with Grafana' Starting apex-pdp application with Grafana +++ docker-compose up -d apex-pdp grafana Creating network "compose_default" with the default driver Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 
latest: Pulling from prom/prometheus Digest: sha256:bc1794e85c9e00293351b967efa267ce6af1c824ac875a9d0c7ac84700a8b53e Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... latest: Pulling from grafana/grafana Digest: sha256:f9811e4e687ffecf1a43adb9b64096c50bc0d7a782f8608530f478b6542de7d5 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 10.10.2: Pulling from mariadb Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT Pulling zookeeper (confluentinc/cp-zookeeper:latest)... latest: Pulling from confluentinc/cp-zookeeper Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest Pulling kafka (confluentinc/cp-kafka:latest)... latest: Pulling from confluentinc/cp-kafka Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047 Status: Downloaded newer image for confluentinc/cp-kafka:latest Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator Digest: sha256:37b4f26d0170f90ca974aea8100c4fea8bf2a2b3b5cdb1e4e7c97492d3a4ad6a Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 
3.1.2-SNAPSHOT: Pulling from onap/policy-api Digest: sha256:fdc9aa26830be0af882248f5f576f0e9466b8e17ff432e8618d01432efa85803 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-pap Digest: sha256:5e7bdea16830f0dd3e16df519f0efbee38922192c2a79297bcac6699fa44e067 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp Digest: sha256:3f9880e060c3465862043c69561fa1d43ab448175d1adf3efd53d751d3b9947d Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT Creating prometheus ... Creating compose_zookeeper_1 ... Creating simulator ... Creating mariadb ... Creating mariadb ... done Creating policy-db-migrator ... Creating simulator ... done Creating compose_zookeeper_1 ... done Creating kafka ... Creating kafka ... done Creating prometheus ... done Creating grafana ... Creating grafana ... done Creating policy-db-migrator ... done Creating policy-api ... Creating policy-api ... done Creating policy-pap ... Creating policy-pap ... done Creating policy-apex-pdp ... Creating policy-apex-pdp ... done +++ echo 'Prometheus server: http://localhost:30259' Prometheus server: http://localhost:30259 +++ echo 'Grafana server: http://localhost:30269' Grafana server: http://localhost:30269 +++ cd /w/workspace/policy-pap-master-project-csit-pap ++ sleep 10 ++ unset http_proxy https_proxy ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 Waiting for REST to come up on localhost port 30003... 
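The source of `wait_for_rest.sh` is not shown in the log, but "Waiting for REST to come up on localhost port 30003..." suggests a poll-until-open loop. A hedged sketch of that shape using bash's `/dev/tcp` pseudo-device (an assumption about the script's internals, not a copy of it):

```shell
# Assumed shape of wait_for_rest.sh: poll a TCP port until it accepts
# connections or a retry budget is exhausted. Bash-only (/dev/tcp).
wait_for_port() {
  local host=$1 port=$2 tries=${3:-60}
  local i
  for ((i = 0; i < tries; i++)); do
    # Subshell so the fd is opened and closed atomically per attempt.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  echo "Timed out waiting for ${host}:${port}" >&2
  return 1
}
```

In this sketch, the log's `wait_for_rest.sh localhost 30003` would correspond to `wait_for_port localhost 30003`.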
NAMES                STATUS
policy-apex-pdp      Up 10 seconds
policy-pap           Up 11 seconds
policy-api           Up 11 seconds
grafana              Up 14 seconds
kafka                Up 15 seconds
mariadb              Up 18 seconds
simulator            Up 17 seconds
compose_zookeeper_1  Up 16 seconds
prometheus           Up 14 seconds
NAMES                STATUS
policy-apex-pdp      Up 15 seconds
policy-pap           Up 16 seconds
policy-api           Up 17 seconds
grafana              Up 19 seconds
kafka                Up 20 seconds
mariadb              Up 23 seconds
simulator            Up 22 seconds
compose_zookeeper_1  Up 21 seconds
prometheus           Up 19 seconds
NAMES                STATUS
policy-apex-pdp      Up 20 seconds
policy-pap           Up 21 seconds
policy-api           Up 22 seconds
grafana              Up 24 seconds
kafka                Up 25 seconds
mariadb              Up 28 seconds
simulator            Up 28 seconds
compose_zookeeper_1  Up 26 seconds
prometheus           Up 25 seconds
NAMES                STATUS
policy-apex-pdp      Up 25 seconds
policy-pap           Up 26 seconds
policy-api           Up 27 seconds
grafana              Up 29 seconds
kafka                Up 30 seconds
mariadb              Up 34 seconds
simulator            Up 33 seconds
compose_zookeeper_1  Up 31 seconds
prometheus           Up 30 seconds
NAMES                STATUS
policy-apex-pdp      Up 30 seconds
policy-pap           Up 31 seconds
policy-api           Up 32 seconds
grafana              Up 34 seconds
kafka                Up 35 seconds
mariadb              Up 39 seconds
simulator            Up 38 seconds
compose_zookeeper_1  Up 36 seconds
prometheus           Up 35 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:14:23 up 4 min, 0 users, load average: 2.55, 1.08, 0.43
Tasks: 208 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.8 us, 3.0 sy, 0.0 ni, 79.0 id, 3.1 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         22G        1.3M        6.4G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                STATUS
policy-apex-pdp      Up 30 seconds
policy-pap           Up 31 seconds
policy-api           Up 32 seconds
grafana              Up 34 seconds
kafka                Up 36 seconds
mariadb              Up 39 seconds
simulator            Up 38 seconds
compose_zookeeper_1  Up 37 seconds
prometheus           Up 35 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID  NAME                 CPU %  MEM USAGE / LIMIT    MEM %  NET I/O          BLOCK I/O        PIDS
779c81be093f  policy-apex-pdp      1.49%  194.2MiB / 31.41GiB  0.60%  6.98kB / 6.77kB  0B / 0B          48
f0db664a443a  policy-pap           3.14%  563.2MiB / 31.41GiB  1.75%  28.3kB / 30.2kB  0B / 153MB       61
1c2bd153d208  policy-api           0.14%  497.5MiB / 31.41GiB  1.55%  999kB / 710kB    0B / 0B          54
f00e99419f43  grafana              0.29%  53.1MiB / 31.41GiB   0.17%  18.5kB / 3.38kB  0B / 24.9MB      15
4ceeac07ec8e  kafka                0.61%  376.2MiB / 31.41GiB  1.17%  70.6kB / 72.7kB  0B / 475kB       83
6036c5abe3ed  mariadb              0.02%  102.1MiB / 31.41GiB  0.32%  995kB / 1.19MB   10.9MB / 71.6MB  37
a0033526e784  simulator            0.09%  123.3MiB / 31.41GiB  0.38%  1.36kB / 0B      225kB / 0B       76
9d802bffc7ba  compose_zookeeper_1  0.17%  98.43MiB / 31.41GiB  0.31%  56.1kB / 50.4kB  0B / 356kB       61
9b867a9bea16  prometheus           0.00%  18.23MiB / 31.41GiB  0.06%  1.37kB / 158B    0B / 0B          12
+ echo
+ cd /tmp/tmp.Xn1lruRwEW
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
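The testplan expansion traced above is a three-step pipe: drop comment and blank lines with `egrep -v`, prefix each remaining suite name with the tests directory via `sed`, then collapse the list onto one line with `xargs`. A self-contained reproduction (the testplan contents below are assumed; the real `testplan.txt` is not shown in full):

```shell
# Reproduce the SUITES construction from the trace above.
TESTS_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
plan=$(mktemp)
cat > "$plan" <<'EOF'
# Test suites are relative paths under csit/resources/tests/.
pap-test.robot

pap-slas.robot
EOF
SUITES=$(egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' "$plan" \
  | sed "s|^|${TESTS_DIR}/|" \
  | xargs)
echo "$SUITES"
```

`xargs` with no command defaults to `echo`, which is what turns the newline-separated paths into the single space-separated `SUITES` string passed to `robot.run`.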
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
pdpTypeC != pdpTypeA
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | FAIL |
22 tests, 21 passed, 1 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | FAIL |
30 tests, 29 passed, 1 failed
==============================================================================
Output:  /tmp/tmp.Xn1lruRwEW/output.xml
Log:     /tmp/tmp.Xn1lruRwEW/log.html
Report:  /tmp/tmp.Xn1lruRwEW/report.html
+ RESULT=1
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 1'
RESULT: 1
+ exit 1
+ on_exit
+ rc=1
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                STATUS
policy-apex-pdp      Up 2 minutes
policy-pap           Up 2 minutes
policy-api           Up 2 minutes
grafana              Up 2 minutes
kafka                Up 2 minutes
mariadb              Up 2 minutes
simulator            Up 2 minutes
compose_zookeeper_1  Up 2 minutes
prometheus           Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:16:13 up 5 min, 0 users, load average: 0.68, 0.91, 0.44
Tasks: 197 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
%Cpu(s): 11.6 us, 2.2 sy, 0.0 ni, 83.6 id, 2.4 wa, 0.0 hi, 0.1 si, 0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.8G         22G        1.3M        6.4G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                STATUS
policy-apex-pdp      Up 2 minutes
policy-pap           Up 2 minutes
policy-api           Up 2 minutes
grafana              Up 2 minutes
kafka                Up 2 minutes
mariadb              Up 2 minutes
simulator Up 2 minutes compose_zookeeper_1 Up 2 minutes prometheus Up 2 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 779c81be093f policy-apex-pdp 1.30% 186.1MiB / 31.41GiB 0.58% 56.3kB / 91.1kB 0B / 0B 52 f0db664a443a policy-pap 0.55% 537.8MiB / 31.41GiB 1.67% 2.33MB / 807kB 0B / 153MB 65 1c2bd153d208 policy-api 0.11% 561.6MiB / 31.41GiB 1.75% 2.49MB / 1.26MB 0B / 0B 57 f00e99419f43 grafana 0.03% 60.84MiB / 31.41GiB 0.19% 19.3kB / 4.33kB 0B / 24.9MB 15 4ceeac07ec8e kafka 9.94% 390.2MiB / 31.41GiB 1.21% 241kB / 215kB 0B / 573kB 85 6036c5abe3ed mariadb 0.01% 103.4MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 10.9MB / 71.9MB 28 a0033526e784 simulator 0.22% 123.5MiB / 31.41GiB 0.38% 1.67kB / 0B 225kB / 0B 78 9d802bffc7ba compose_zookeeper_1 0.07% 99.74MiB / 31.41GiB 0.31% 59kB / 52kB 0B / 356kB 61 9b867a9bea16 prometheus 0.00% 24.45MiB / 31.41GiB 0.08% 181kB / 11kB 0B / 0B 12 + echo + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... 
++ docker-compose logs
++ cat docker_compose.log
Attaching to policy-apex-pdp, policy-pap, policy-api, grafana, kafka, policy-db-migrator, mariadb, simulator, compose_zookeeper_1, prometheus
grafana | logger=settings t=2024-03-15T23:13:49.356409234Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2024-03-15T23:13:49Z
grafana | logger=settings t=2024-03-15T23:13:49.356733643Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2024-03-15T23:13:49.356751093Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2024-03-15T23:13:49.356755673Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2024-03-15T23:13:49.356760734Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2024-03-15T23:13:49.356763484Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-03-15T23:13:49.356766444Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2024-03-15T23:13:49.356769744Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2024-03-15T23:13:49.356773664Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2024-03-15T23:13:49.356779454Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2024-03-15T23:13:49.356782084Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-03-15T23:13:49.356789334Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2024-03-15T23:13:49.356792704Z level=info msg=Target target=[all]
grafana | logger=settings t=2024-03-15T23:13:49.35698344Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2024-03-15T23:13:49.35699686Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2024-03-15T23:13:49.3570047Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2024-03-15T23:13:49.357009261Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2024-03-15T23:13:49.357012991Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2024-03-15T23:13:49.357015941Z level=info msg="App mode production"
grafana | logger=sqlstore t=2024-03-15T23:13:49.35734746Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2024-03-15T23:13:49.357375651Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2024-03-15T23:13:49.358184834Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2024-03-15T23:13:49.359280845Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2024-03-15T23:13:49.360222071Z level=info msg="Migration successfully executed" id="create migration_log table" duration=940.956µs
grafana | logger=migrator t=2024-03-15T23:13:49.364634466Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2024-03-15T23:13:49.365326446Z level=info msg="Migration successfully executed" id="create user table" duration=691.78µs
grafana | logger=migrator t=2024-03-15T23:13:49.371245203Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2024-03-15T23:13:49.37255481Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.309027ms
grafana | logger=migrator t=2024-03-15T23:13:49.377452698Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2024-03-15T23:13:49.378644172Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.190294ms
grafana | logger=migrator t=2024-03-15T23:13:49.38247828Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.383602282Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.124402ms
grafana | logger=migrator t=2024-03-15T23:13:49.388670065Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.389415416Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=744.941µs
grafana | logger=migrator t=2024-03-15T23:13:49.392564495Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.395797206Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.233371ms
grafana | logger=migrator t=2024-03-15T23:13:49.398794561Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2024-03-15T23:13:49.399626755Z level=info msg="Migration successfully executed" id="create user table v2" duration=831.594µs
grafana | logger=migrator t=2024-03-15T23:13:49.403998298Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2024-03-15T23:13:49.404744839Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=740.241µs
grafana | logger=migrator t=2024-03-15T23:13:49.407938479Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2024-03-15T23:13:49.40867821Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=742.311µs
grafana | logger=migrator t=2024-03-15T23:13:49.411921462Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2024-03-15T23:13:49.412334244Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=412.342µs
grafana | logger=migrator t=2024-03-15T23:13:49.417817509Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2024-03-15T23:13:49.418705634Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=887.596µs
grafana | logger=migrator t=2024-03-15T23:13:49.423659394Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2024-03-15T23:13:49.425465265Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.809021ms
grafana | logger=migrator t=2024-03-15T23:13:49.431845695Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2024-03-15T23:13:49.431873096Z level=info msg="Migration successfully executed" id="Update user table charset" duration=28.401µs
grafana | logger=migrator t=2024-03-15T23:13:49.436663361Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2024-03-15T23:13:49.437933327Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.266926ms
grafana | logger=migrator t=2024-03-15T23:13:49.446586201Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2024-03-15T23:13:49.446921401Z level=info msg="Migration successfully executed" id="Add missing user data" duration=334.94µs
grafana | logger=migrator t=2024-03-15T23:13:49.450892433Z level=info msg="Executing migration" id="Add is_disabled column to user"
zookeeper_1 | ===> User
zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper_1 | ===> Configuring ...
zookeeper_1 | ===> Running preflight checks ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper_1 | ===> Launching ...
zookeeper_1 | ===> Launching zookeeper ...
zookeeper_1 | [2024-03-15 23:13:49,958] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-03-15 23:13:49,965] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-03-15 23:13:49,966] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-03-15 23:13:49,966] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-03-15 23:13:49,966] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-03-15 23:13:49,967] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-03-15 23:13:49,967] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-03-15 23:13:49,967] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-03-15 23:13:49,967] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper_1 | [2024-03-15 23:13:49,969] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
zookeeper_1 | [2024-03-15 23:13:49,969] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-03-15 23:13:49,970] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-03-15 23:13:49,970] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-03-15 23:13:49,970] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-03-15 23:13:49,970] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-03-15 23:13:49,970] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper_1 | [2024-03-15 23:13:49,981] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics)
zookeeper_1 | [2024-03-15 23:13:49,983] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-03-15 23:13:49,984] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-03-15 23:13:49,986] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-03-15 23:13:49,995] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
policy-db-migrator | Waiting for mariadb port 3306...
policy-db-migrator | Connection to mariadb (172.17.0.5) 3306 port [tcp/mysql] succeeded!
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | name version
policy-db-migrator | policyadmin 0
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator | 
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator | 
zookeeper_1 | [2024-03-15 23:13:49,995] INFO   ______                  _                       (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,995] INFO  |___  /                 | |                      (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,995] INFO     / /    ___     ___   | | __   ___     ___   _ __     ___   _ __  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,995] INFO    / /    / _ \   / _ \  | |/ /  / _ \   / _ \ | '_ \   / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,995] INFO   / /__  | (_) | | (_) | |   <  |  __/  |  __/ | |_) | |  __/ | |    (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,995] INFO  /_____|  \___/   \___/  |_|\_\  \___|   \___| | .__/   \___| |_|    (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,995] INFO                                                | |                   (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,995] INFO                                                |_|                   (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,996] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:host.name=9d802bffc7ba (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
grafana | logger=migrator t=2024-03-15T23:13:49.452805087Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.912364ms
grafana | logger=migrator t=2024-03-15T23:13:49.45786332Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2024-03-15T23:13:49.458694473Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=831.103µs
grafana | logger=migrator t=2024-03-15T23:13:49.464022134Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2024-03-15T23:13:49.465264559Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.241745ms
grafana | logger=migrator t=2024-03-15T23:13:49.469461668Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2024-03-15T23:13:49.477451083Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.989075ms
grafana | logger=migrator t=2024-03-15T23:13:49.481605331Z level=info msg="Executing migration" id="Add uid column to user"
grafana | logger=migrator t=2024-03-15T23:13:49.482912948Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.305686ms
grafana | logger=migrator t=2024-03-15T23:13:49.486599442Z level=info msg="Executing migration" id="Update uid column values for users"
grafana | logger=migrator t=2024-03-15T23:13:49.486910621Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=310.578µs
grafana | logger=migrator t=2024-03-15T23:13:49.492462407Z level=info msg="Executing migration" id="Add unique index user_uid"
grafana | logger=migrator t=2024-03-15T23:13:49.493287991Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=825.094µs
grafana | logger=migrator t=2024-03-15T23:13:49.497908761Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2024-03-15T23:13:49.499311971Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.40293ms
grafana | logger=migrator t=2024-03-15T23:13:49.504462756Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2024-03-15T23:13:49.505180657Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=714.831µs
grafana | logger=migrator t=2024-03-15T23:13:49.511377712Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2024-03-15T23:13:49.512556315Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.178023ms
grafana | logger=migrator t=2024-03-15T23:13:49.520053917Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2024-03-15T23:13:49.521250751Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.196914ms
grafana | logger=migrator t=2024-03-15T23:13:49.527159278Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2024-03-15T23:13:49.528529126Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.369298ms
grafana | logger=migrator t=2024-03-15T23:13:49.534119874Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2024-03-15T23:13:49.534157555Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=38.971µs
grafana | logger=migrator t=2024-03-15T23:13:49.54034935Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.541481432Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.125512ms
grafana | logger=migrator t=2024-03-15T23:13:49.547293626Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.548060638Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=767.182µs
grafana | logger=migrator t=2024-03-15T23:13:49.551605758Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.55238435Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=778.652µs
grafana | logger=migrator t=2024-03-15T23:13:49.559010988Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.560138469Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.127612ms
grafana | logger=migrator t=2024-03-15T23:13:49.564887244Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.567997081Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.109838ms
grafana | logger=migrator t=2024-03-15T23:13:49.572409416Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2024-03-15T23:13:49.573501577Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.092171ms
grafana | logger=migrator t=2024-03-15T23:13:49.581483122Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2024-03-15T23:13:49.583143179Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.659137ms
grafana | logger=migrator t=2024-03-15T23:13:49.589660403Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2024-03-15T23:13:49.590529038Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=869.565µs
grafana | logger=migrator t=2024-03-15T23:13:49.598343109Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2024-03-15T23:13:49.599662916Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.318837ms
grafana | logger=migrator t=2024-03-15T23:13:49.603386731Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
policy-db-migrator | 
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/us
r/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/u
sr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../
share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,997] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,998] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,998] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,998] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,998] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,998] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,998] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,999] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:49,999] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:50,000] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper_1 | [2024-03-15 23:13:50,001] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:50,001] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:50,002] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper_1 | [2024-03-15 23:13:50,002] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
grafana | logger=migrator t=2024-03-15T23:13:49.604641707Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.255156ms
grafana | logger=migrator t=2024-03-15T23:13:49.609443832Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2024-03-15T23:13:49.609958787Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=514.445µs
grafana | logger=migrator t=2024-03-15T23:13:49.613236089Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2024-03-15T23:13:49.613893308Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=656.539µs
grafana | logger=migrator t=2024-03-15T23:13:49.61715722Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2024-03-15T23:13:49.61785861Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=701.08µs
grafana | logger=migrator t=2024-03-15T23:13:49.62174803Z level=info msg="Executing migration" id="create star table"
policy-apex-pdp | Waiting for mariadb port 3306...
policy-apex-pdp | mariadb (172.17.0.5:3306) open
policy-apex-pdp | Waiting for kafka port 9092...
policy-apex-pdp | kafka (172.17.0.7:9092) open
policy-apex-pdp | Waiting for pap port 6969...
policy-apex-pdp | pap (172.17.0.10:6969) open
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
policy-apex-pdp | [2024-03-15T23:14:22.942+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
policy-apex-pdp | [2024-03-15T23:14:23.165+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | allow.auto.create.topics = true
policy-apex-pdp | auto.commit.interval.ms = 5000
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | auto.offset.reset = latest
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | check.crcs = true
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-1
policy-apex-pdp | client.rack = 
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | default.api.timeout.ms = 60000
policy-apex-pdp | enable.auto.commit = true
policy-apex-pdp | exclude.internal.topics = true
policy-apex-pdp | fetch.max.bytes = 52428800
policy-apex-pdp | fetch.max.wait.ms = 500
policy-apex-pdp | fetch.min.bytes = 1
policy-apex-pdp | group.id = 2f21b508-fe17-4ab8-9275-1762b58c9ac3
policy-apex-pdp | group.instance.id = null
policy-apex-pdp | heartbeat.interval.ms = 3000
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | internal.leave.group.on.close = true
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
policy-apex-pdp | isolation.level = read_uncommitted
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | max.partition.fetch.bytes = 1048576
policy-apex-pdp | max.poll.interval.ms = 300000
policy-apex-pdp | max.poll.records = 500
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-apex-pdp | receive.buffer.bytes = 65536
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-03-15T23:13:49.622961164Z level=info msg="Migration successfully executed" id="create star table" duration=1.212914ms
grafana | logger=migrator t=2024-03-15T23:13:49.627717119Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
grafana | logger=migrator t=2024-03-15T23:13:49.628555242Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=841.084µs
grafana | logger=migrator t=2024-03-15T23:13:49.631923507Z level=info msg="Executing migration" id="create org table v1"
grafana | logger=migrator t=2024-03-15T23:13:49.632798642Z level=info msg="Migration successfully executed" id="create org table v1" duration=874.755µs
grafana | logger=migrator t=2024-03-15T23:13:49.636540818Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.637792803Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.251595ms
grafana | logger=migrator t=2024-03-15T23:13:49.642431214Z level=info msg="Executing migration" id="create org_user table v1"
grafana | logger=migrator t=2024-03-15T23:13:49.643190106Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=758.262µs
grafana | logger=migrator t=2024-03-15T23:13:49.654187436Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.655504404Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.316968ms
grafana | logger=migrator t=2024-03-15T23:13:49.659341532Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.660724831Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.383099ms
grafana | logger=migrator t=2024-03-15T23:13:49.66492221Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.666493524Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.571124ms
grafana | logger=migrator t=2024-03-15T23:13:49.671158966Z level=info msg="Executing migration" id="Update org table charset"
grafana | logger=migrator t=2024-03-15T23:13:49.671277979Z level=info msg="Migration successfully executed" id="Update org table charset" duration=115.3µs
grafana | logger=migrator t=2024-03-15T23:13:49.674468679Z level=info msg="Executing migration" id="Update org_user table charset"
grafana | logger=migrator t=2024-03-15T23:13:49.674597143Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=129.104µs
grafana | logger=migrator t=2024-03-15T23:13:49.677842345Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
grafana | logger=migrator t=2024-03-15T23:13:49.678280057Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=438.112µs
grafana | logger=migrator t=2024-03-15T23:13:49.681864628Z level=info msg="Executing migration" id="create dashboard table"
grafana | logger=migrator t=2024-03-15T23:13:49.683169915Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.305457ms
grafana | logger=migrator t=2024-03-15T23:13:49.687651612Z level=info msg="Executing migration" id="add index dashboard.account_id"
grafana | logger=migrator t=2024-03-15T23:13:49.688547177Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=895.305µs
grafana | logger=migrator t=2024-03-15T23:13:49.691942203Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
grafana | logger=migrator t=2024-03-15T23:13:49.69288657Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=942.777µs
grafana | logger=migrator t=2024-03-15T23:13:49.696668757Z level=info msg="Executing migration" id="create dashboard_tag table"
grafana | logger=migrator t=2024-03-15T23:13:49.6978639Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.194673ms
grafana | logger=migrator t=2024-03-15T23:13:49.701293837Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
grafana | logger=migrator t=2024-03-15T23:13:49.702176912Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=888.855µs
grafana | logger=migrator t=2024-03-15T23:13:49.706514245Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.707288797Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=774.692µs
grafana | logger=migrator t=2024-03-15T23:13:49.710772485Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
grafana | logger=migrator t=2024-03-15T23:13:49.7159074Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.134375ms
grafana | logger=migrator t=2024-03-15T23:13:49.72791989Z level=info msg="Executing migration" id="create dashboard v2"
grafana | logger=migrator t=2024-03-15T23:13:49.728897317Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=980.358µs
grafana | logger=migrator t=2024-03-15T23:13:49.73290287Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
grafana | logger=migrator t=2024-03-15T23:13:49.733520348Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=617.428µs
grafana | logger=migrator t=2024-03-15T23:13:49.736967755Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
grafana | logger=migrator t=2024-03-15T23:13:49.737592903Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=624.768µs
grafana | logger=migrator t=2024-03-15T23:13:49.740768733Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
grafana | logger=migrator t=2024-03-15T23:13:49.741090302Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=321.15µs
grafana | logger=migrator t=2024-03-15T23:13:49.745358542Z level=info msg="Executing migration" id="drop table dashboard_v1"
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
grafana | logger=migrator t=2024-03-15T23:13:49.746035081Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=677.289µs
zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
mariadb | 2024-03-15 23:13:44+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:49.749378276Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
kafka | ===> User
zookeeper_1 | [2024-03-15 23:13:50,003] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
mariadb | 2024-03-15 23:13:44+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
policy-api | Waiting for mariadb port 3306...
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:49.74951559Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=138.304µs
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper_1 | [2024-03-15 23:13:50,005] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-03-15 23:13:50,006] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
mariadb | 2024-03-15 23:13:44+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
policy-api | mariadb (172.17.0.5:3306) open
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-db-migrator | 
policy-db-migrator | 
grafana | logger=migrator t=2024-03-15T23:13:49.752821043Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
kafka | ===> Configuring ...
policy-pap | Waiting for mariadb port 3306...
zookeeper_1 | [2024-03-15 23:13:50,006] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
mariadb | 2024-03-15 23:13:44+00:00 [Note] [Entrypoint]: Initializing database files
simulator | overriding logback.xml
policy-api | Waiting for policy-db-migrator port 6824...
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:49.754225533Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.40439ms
kafka | Running in Zookeeper mode...
policy-pap | mariadb (172.17.0.5:3306) open
zookeeper_1 | [2024-03-15 23:13:50,006] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
mariadb | 2024-03-15 23:13:44 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
simulator | 2024-03-15 23:13:45,770 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
policy-api | policy-db-migrator (172.17.0.6:6824) open
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:564 level=info msg="No time or size retention was set so using the default time retention" duration=15d
kafka | ===> Running preflight checks ...
policy-pap | Waiting for kafka port 9092...
zookeeper_1 | [2024-03-15 23:13:50,006] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
mariadb | 2024-03-15 23:13:44 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
simulator | 2024-03-15 23:13:45,826 INFO org.onap.policy.models.simulators starting
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-03-15T23:13:49.758444002Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
policy-db-migrator | 
prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:608 level=info msg="Starting Prometheus Server" mode=server version="(version=2.50.1, branch=HEAD, revision=8c9b0285360a0b6288d76214a75ce3025bce4050)"
kafka | ===> Check if /var/lib/kafka/data is writable ...
policy-pap | kafka (172.17.0.7:9092) open
zookeeper_1 | [2024-03-15 23:13:50,027] INFO Logging initialized @502ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
mariadb | 2024-03-15 23:13:44 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
simulator | 2024-03-15 23:13:45,826 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties policy-api | policy-apex-pdp | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-03-15T23:13:49.759742179Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.298056ms policy-db-migrator | prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:613 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@6213bb3ee580, date=20240226-11:36:26, tags=netgo,builtinassets,stringlabels)" kafka | ===> Check if Zookeeper is healthy ... policy-pap | Waiting for api port 6969... zookeeper_1 | [2024-03-15 23:13:50,115] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) mariadb | simulator | 2024-03-15 23:13:46,007 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION policy-api | . ____ _ __ _ _ policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-03-15T23:13:49.762952349Z level=info msg="Executing migration" id="Add column gnetId in dashboard" policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:614 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" kafka | SLF4J: Class path contains multiple SLF4J bindings. 
policy-pap | api (172.17.0.9:6969) open zookeeper_1 | [2024-03-15 23:13:50,115] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) mariadb | simulator | 2024-03-15 23:13:46,008 INFO org.onap.policy.models.simulators starting A&AI simulator policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-03-15T23:13:49.764277077Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.321878ms policy-db-migrator | -------------- prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:615 level=info fd_limits="(soft=1048576, hard=1048576)" kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml zookeeper_1 | [2024-03-15 23:13:50,134] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 
simulator | 2024-03-15 23:13:46,114 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) prometheus | ts=2024-03-15T23:13:48.327Z caller=main.go:616 level=info vm_limits="(soft=unlimited, hard=unlimited)" grafana | logger=migrator t=2024-03-15T23:13:49.767881718Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json zookeeper_1 | [2024-03-15 23:13:50,168] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) mariadb | To do so, start the server, then issue the following command: simulator | 2024-03-15 23:13:46,125 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- prometheus | ts=2024-03-15T23:13:48.336Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 grafana | logger=migrator t=2024-03-15T23:13:49.768492046Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=609.728µs kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
policy-pap | zookeeper_1 | [2024-03-15 23:13:50,168] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) mariadb | simulator | 2024-03-15 23:13:46,128 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | prometheus | ts=2024-03-15T23:13:48.337Z caller=main.go:1118 level=info msg="Starting TSDB ..." grafana | logger=migrator t=2024-03-15T23:13:49.772662494Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] policy-pap | . 
____ _ __ _ _ zookeeper_1 | [2024-03-15 23:13:50,170] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) mariadb | '/usr/bin/mysql_secure_installation' simulator | 2024-03-15 23:13:46,134 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 policy-api | =========|_|==============|___/=/_/_/_/ policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | prometheus | ts=2024-03-15T23:13:48.342Z caller=head.go:610 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" grafana | logger=migrator t=2024-03-15T23:13:49.774556317Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.893864ms kafka | [2024-03-15 23:13:51,514] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ zookeeper_1 | [2024-03-15 23:13:50,173] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) mariadb | simulator | 2024-03-15 23:13:46,190 INFO Session workerName=node0 policy-api | :: Spring Boot :: (v3.1.8) policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql prometheus | ts=2024-03-15T23:13:48.342Z caller=head.go:692 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.06µs grafana | logger=migrator t=2024-03-15T23:13:49.778056256Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" kafka | [2024-03-15 23:13:51,515] INFO Client environment:host.name=4ceeac07ec8e (org.apache.zookeeper.ZooKeeper) policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ zookeeper_1 | [2024-03-15 23:13:50,181] INFO Started 
o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) mariadb | which will also give you the option of removing the test simulator | 2024-03-15 23:13:46,707 INFO Using GSON for REST calls policy-api | policy-apex-pdp | security.protocol = PLAINTEXT policy-db-migrator | -------------- prometheus | ts=2024-03-15T23:13:48.342Z caller=head.go:700 level=info component=tsdb msg="Replaying WAL, this may take a while" grafana | logger=migrator t=2024-03-15T23:13:49.77889733Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=840.514µs kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) zookeeper_1 | [2024-03-15 23:13:50,195] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) mariadb | databases and anonymous user created by default. This is simulator | 2024-03-15 23:13:46,807 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE} policy-api | [2024-03-15T23:13:58.629+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 16 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-apex-pdp | security.providers = null policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) prometheus | ts=2024-03-15T23:13:48.342Z caller=head.go:771 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 grafana | logger=migrator t=2024-03-15T23:13:49.782417229Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / zookeeper_1 | [2024-03-15 23:13:50,195] INFO Started @670ms (org.eclipse.jetty.server.Server) mariadb | strongly recommended for production servers. simulator | 2024-03-15 23:13:46,816 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} policy-api | [2024-03-15T23:13:58.630+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-apex-pdp | send.buffer.bytes = 131072 policy-db-migrator | -------------- prometheus | ts=2024-03-15T23:13:48.342Z caller=head.go:808 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=31.161µs wal_replay_duration=416.304µs wbl_replay_duration=190ns total_replay_duration=475.286µs grafana | logger=migrator t=2024-03-15T23:13:49.783227292Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=809.833µs kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) policy-pap | =========|_|==============|___/=/_/_/_/ zookeeper_1 | [2024-03-15 23:13:50,195] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) mariadb | simulator | 2024-03-15 23:13:46,825 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1540ms policy-api | [2024-03-15T23:14:00.441+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 
policy-apex-pdp | session.timeout.ms = 45000 policy-db-migrator | prometheus | ts=2024-03-15T23:13:48.342Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 grafana | logger=migrator t=2024-03-15T23:13:49.787361219Z level=info msg="Executing migration" id="Update dashboard table charset" policy-pap | :: Spring Boot :: (v3.1.8) kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka
/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jack
son-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar
:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotatio
ns-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,199] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb simulator | 2024-03-15 23:13:46,825 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4302 ms. policy-api | [2024-03-15T23:14:00.534+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 84 ms. Found 6 JPA repository interfaces. policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | prometheus | ts=2024-03-15T23:13:48.348Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 grafana | logger=migrator t=2024-03-15T23:13:49.787485072Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=123.993µs policy-pap | kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,200] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) mariadb | simulator | 2024-03-15 23:13:46,835 INFO org.onap.policy.models.simulators starting SDNC simulator policy-api | [2024-03-15T23:14:00.959+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql prometheus | ts=2024-03-15T23:13:48.350Z caller=main.go:1139 level=info fs_type=EXT4_SUPER_MAGIC grafana | logger=migrator t=2024-03-15T23:13:49.790180268Z level=info msg="Executing migration" id="Update dashboard_tag table charset" policy-pap | [2024-03-15T23:14:11.924+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 29 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,202] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) mariadb | Please report any problems at https://mariadb.org/jira simulator | 2024-03-15 23:13:46,839 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-api | [2024-03-15T23:14:00.959+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-apex-pdp | ssl.cipher.suites = null policy-db-migrator | -------------- prometheus | ts=2024-03-15T23:13:48.350Z caller=main.go:1142 level=info msg="TSDB started" grafana | logger=migrator t=2024-03-15T23:13:49.790293652Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=112.724µs policy-pap | [2024-03-15T23:14:11.925+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" kafka | [2024-03-15 23:13:51,515] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,204] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) mariadb | simulator | 2024-03-15 23:13:46,839 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-api | [2024-03-15T23:14:01.653+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName 
VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) prometheus | ts=2024-03-15T23:13:48.350Z caller=main.go:1324 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml grafana | logger=migrator t=2024-03-15T23:13:49.794067098Z level=info msg="Executing migration" id="Add column folder_id in dashboard" policy-pap | [2024-03-15T23:14:13.936+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,219] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) mariadb | The latest information about MariaDB is available at https://mariadb.org/. simulator | 2024-03-15 23:13:46,840 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-api | [2024-03-15T23:14:01.665+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-db-migrator | 
-------------- prometheus | ts=2024-03-15T23:13:48.351Z caller=main.go:1361 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=893.839µs db_storage=1.19µs remote_storage=1.4µs web_handler=700ns query_engine=1.14µs scrape=274.349µs scrape_sd=101.143µs notify=31.371µs notify_sd=24.261µs rules=1.67µs tracing=4.41µs grafana | logger=migrator t=2024-03-15T23:13:49.797295289Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.228761ms policy-pap | [2024-03-15T23:14:14.063+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 116 ms. Found 7 JPA repository interfaces. kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,219] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) mariadb | simulator | 2024-03-15 23:13:46,841 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 policy-api | [2024-03-15T23:14:01.668+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-apex-pdp | ssl.engine.factory.class = null policy-db-migrator | prometheus | ts=2024-03-15T23:13:48.351Z caller=main.go:1103 level=info msg="Server is ready to receive web requests." grafana | logger=migrator t=2024-03-15T23:13:49.80260551Z level=info msg="Executing migration" id="Add column isFolder in dashboard" policy-pap | [2024-03-15T23:14:14.453+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,220] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) mariadb | Consider joining MariaDB's strong and vibrant community: simulator | 2024-03-15 23:13:46,855 INFO Session workerName=node0 policy-api | [2024-03-15T23:14:01.668+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] policy-apex-pdp | ssl.key.password = null policy-db-migrator | prometheus | ts=2024-03-15T23:13:48.351Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." grafana | logger=migrator t=2024-03-15T23:13:49.804607506Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.020457ms policy-pap | [2024-03-15T23:14:14.453+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler kafka | [2024-03-15 23:13:51,515] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,221] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) mariadb | https://mariadb.org/get-involved/ simulator | 2024-03-15 23:13:46,923 INFO Using GSON for REST calls policy-api | [2024-03-15T23:14:01.763+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql grafana | logger=migrator t=2024-03-15T23:13:49.807541149Z level=info msg="Executing migration" id="Add column has_acl in dashboard" policy-pap | [2024-03-15T23:14:15.144+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) kafka | [2024-03-15 23:13:51,515] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,225] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) mariadb | simulator | 2024-03-15 23:13:46,934 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE} policy-api | [2024-03-15T23:14:01.763+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3067 ms policy-apex-pdp | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.809415602Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.874163ms policy-pap | [2024-03-15T23:14:15.155+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] kafka | [2024-03-15 23:13:51,515] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) zookeeper_1 | 
[2024-03-15 23:13:50,225] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) mariadb | 2024-03-15 23:13:45+00:00 [Note] [Entrypoint]: Database files initialized simulator | 2024-03-15 23:13:46,936 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} policy-api | [2024-03-15T23:14:02.210+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-apex-pdp | ssl.keystore.key = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-03-15T23:13:49.813558129Z level=info msg="Executing migration" id="Add column uid in dashboard" policy-pap | [2024-03-15T23:14:15.157+00:00|INFO|StandardService|main] Starting service [Tomcat] kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,228] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) mariadb | 2024-03-15 23:13:45+00:00 [Note] [Entrypoint]: Starting temporary server simulator | 2024-03-15 23:13:46,936 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1651ms policy-api | [2024-03-15T23:14:02.298+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-apex-pdp | ssl.keystore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.815391321Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.831252ms policy-pap | [2024-03-15T23:14:15.157+00:00|INFO|StandardEngine|main] 
Starting Servlet engine: [Apache Tomcat/10.1.18] kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,229] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) mariadb | 2024-03-15 23:13:45+00:00 [Note] [Entrypoint]: Waiting for server startup policy-api | [2024-03-15T23:14:02.301+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer simulator | 2024-03-15 23:13:46,936 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4904 ms. 
policy-apex-pdp | ssl.keystore.password = null policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.818086477Z level=info msg="Executing migration" id="Update uid column values in dashboard" policy-pap | [2024-03-15T23:14:15.257+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext kafka | [2024-03-15 23:13:51,515] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,229] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) mariadb | 2024-03-15 23:13:45 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 99 ... policy-api | [2024-03-15T23:14:02.351+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled simulator | 2024-03-15 23:13:46,938 INFO org.onap.policy.models.simulators starting SO simulator policy-apex-pdp | ssl.keystore.type = JKS policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.818288283Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=201.896µs policy-pap | [2024-03-15T23:14:15.257+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3249 ms kafka | [2024-03-15 23:13:51,518] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper) zookeeper_1 | [2024-03-15 23:13:50,239] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 policy-api | [2024-03-15T23:14:02.716+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer simulator | 2024-03-15 23:13:46,941 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-apex-pdp | ssl.protocol = TLSv1.3 policy-db-migrator | > upgrade 0450-pdpgroup.sql grafana | logger=migrator t=2024-03-15T23:13:49.82102637Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" policy-pap | [2024-03-15T23:14:15.686+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] kafka | [2024-03-15 23:13:51,522] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) zookeeper_1 | [2024-03-15 23:13:50,239] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Number of transaction pools: 1 policy-api | [2024-03-15T23:14:02.736+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
simulator | 2024-03-15 23:13:46,942 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | ssl.provider = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.821789932Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=762.911µs policy-pap | [2024-03-15T23:14:15.772+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 kafka | [2024-03-15 23:13:51,527] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) zookeeper_1 | [2024-03-15 23:13:50,252] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions policy-api | [2024-03-15T23:14:02.842+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@63b3ee82 simulator | 2024-03-15 23:13:46,944 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, 
host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | ssl.secure.random.implementation = null policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) grafana | logger=migrator t=2024-03-15T23:13:49.826201766Z level=info msg="Executing migration" id="Remove unique index org_id_slug" policy-pap | [2024-03-15T23:14:15.775+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer kafka | [2024-03-15 23:13:51,535] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) zookeeper_1 | [2024-03-15 23:13:50,253] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) mariadb | 2024-03-15 23:13:45 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) policy-api | [2024-03-15T23:14:02.845+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
simulator | 2024-03-15 23:13:46,947 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.826889376Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=684.389µs policy-pap | [2024-03-15T23:14:15.814+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled kafka | [2024-03-15 23:13:51,562] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) zookeeper_1 | [2024-03-15 23:13:51,591] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) mariadb | 2024-03-15 23:13:45 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) policy-api | [2024-03-15T23:14:04.791+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) simulator | 2024-03-15 23:13:46,949 INFO Session workerName=node0 policy-apex-pdp | ssl.truststore.certificates = null policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.829852159Z level=info msg="Executing migration" id="Update dashboard title length" policy-pap | [2024-03-15T23:14:16.158+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer kafka | [2024-03-15 23:13:51,563] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) mariadb | 2024-03-15 23:13:45 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF policy-api | [2024-03-15T23:14:04.794+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' simulator | 
2024-03-15 23:13:46,997 INFO Using GSON for REST calls policy-apex-pdp | ssl.truststore.location = null policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.82988013Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=28.731µs policy-pap | [2024-03-15T23:14:16.177+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... kafka | [2024-03-15 23:13:51,572] INFO Socket connection established, initiating session, client: /172.17.0.7:44428, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB policy-api | [2024-03-15T23:14:05.958+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml simulator | 2024-03-15 23:13:47,009 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE} policy-apex-pdp | ssl.truststore.password = null policy-db-migrator | > upgrade 0460-pdppolicystatus.sql grafana | logger=migrator t=2024-03-15T23:13:49.83269736Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" policy-pap | [2024-03-15T23:14:16.289+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7b6e5c12 kafka | [2024-03-15 23:13:51,613] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000034dc50000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Completed initialization of buffer pool policy-api | [2024-03-15T23:14:06.826+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] simulator | 2024-03-15 23:13:47,011 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} policy-apex-pdp | ssl.truststore.type = JKS policy-db-migrator | -------------- 
grafana | logger=migrator t=2024-03-15T23:13:49.833655807Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=957.997µs policy-pap | [2024-03-15T23:14:16.291+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. kafka | [2024-03-15 23:13:51,744] INFO EventThread shut down for session: 0x10000034dc50000 (org.apache.zookeeper.ClientCnxn) mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) policy-api | [2024-03-15T23:14:08.034+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning simulator | 2024-03-15 23:13:47,012 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @1727ms policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-03-15T23:13:49.83765826Z level=info msg="Executing migration" id="create dashboard_provisioning" policy-pap | [2024-03-15T23:14:18.205+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) kafka | [2024-03-15 23:13:51,745] INFO Session: 0x10000034dc50000 closed (org.apache.zookeeper.ZooKeeper) mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: 128 rollback segments are 
active. policy-api | [2024-03-15T23:14:08.277+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2f84848e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@607c7f58, org.springframework.security.web.context.SecurityContextHolderFilter@7b3d759f, org.springframework.security.web.header.HeaderWriterFilter@15200332, org.springframework.security.web.authentication.logout.LogoutFilter@25e7e6d, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4c66b3d9, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@62c4ad40, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@9bc10bd, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@4bbb00a4, org.springframework.security.web.access.ExceptionTranslationFilter@4529b266, org.springframework.security.web.access.intercept.AuthorizationFilter@3413effc] simulator | 2024-03-15 23:13:47,012 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4931 ms. 
policy-apex-pdp | policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.838449692Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=790.512µs policy-pap | [2024-03-15T23:14:18.209+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' kafka | Using log4j config /etc/kafka/log4j.properties mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... policy-api | [2024-03-15T23:14:09.215+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' simulator | 2024-03-15 23:13:47,014 INFO org.onap.policy.models.simulators starting VFC simulator policy-apex-pdp | [2024-03-15T23:14:23.335+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.841281832Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" policy-pap | [2024-03-15T23:14:18.751+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository kafka | ===> Launching ... mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
policy-api | [2024-03-15T23:14:09.325+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] simulator | 2024-03-15 23:13:47,017 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START policy-apex-pdp | [2024-03-15T23:14:23.336+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.846760387Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.477765ms policy-pap | [2024-03-15T23:14:19.126+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository kafka | ===> Launching kafka ... 
mariadb | 2024-03-15 23:13:45 0 [Note] InnoDB: log sequence number 45452; transaction id 14 policy-api | [2024-03-15T23:14:09.364+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' simulator | 2024-03-15 23:13:47,017 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-03-15T23:14:23.336+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544463333 policy-db-migrator | > upgrade 0470-pdp.sql grafana | logger=migrator t=2024-03-15T23:13:49.850558684Z level=info msg="Executing migration" id="create dashboard_provisioning v2" policy-pap | [2024-03-15T23:14:19.240+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository kafka | [2024-03-15 23:13:52,506] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) mariadb | 2024-03-15 23:13:45 0 [Note] Plugin 'FEEDBACK' is disabled. 
policy-api | [2024-03-15T23:14:09.383+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.519 seconds (process running for 12.14) simulator | 2024-03-15 23:13:47,020 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING policy-apex-pdp | [2024-03-15T23:14:23.338+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-1, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.851299135Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=739.871µs policy-pap | [2024-03-15T23:14:19.518+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | [2024-03-15 23:13:52,873] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) mariadb | 2024-03-15 23:13:45 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 
policy-api | [2024-03-15T23:14:26.666+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' simulator | 2024-03-15 23:13:47,020 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 policy-apex-pdp | [2024-03-15T23:14:23.351+00:00|INFO|ServiceManager|main] service manager starting policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-03-15T23:13:49.854100744Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" policy-pap | allow.auto.create.topics = true kafka | [2024-03-15 23:13:52,954] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) mariadb | 2024-03-15 23:13:45 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. policy-api | [2024-03-15T23:14:26.666+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' simulator | 2024-03-15 23:13:47,033 INFO Session workerName=node0 policy-apex-pdp | [2024-03-15T23:14:23.352+00:00|INFO|ServiceManager|main] service manager starting topics policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.854909677Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=810.693µs policy-pap | auto.commit.interval.ms = 5000 kafka | [2024-03-15 23:13:52,956] INFO starting (kafka.server.KafkaServer) mariadb | 2024-03-15 23:13:45 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 
policy-api | [2024-03-15T23:14:26.668+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms simulator | 2024-03-15 23:13:47,074 INFO Using GSON for REST calls policy-apex-pdp | [2024-03-15T23:14:23.356+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2f21b508-fe17-4ab8-9275-1762b58c9ac3, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.859344732Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" policy-pap | auto.include.jmx.reporter = true kafka | [2024-03-15 23:13:52,956] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) mariadb | 2024-03-15 23:13:45 0 [Note] mariadbd: ready for connections. policy-api | [2024-03-15T23:14:26.939+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: simulator | 2024-03-15 23:13:47,082 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE} policy-apex-pdp | [2024-03-15T23:14:23.377+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.860125495Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=780.282µs policy-pap | auto.offset.reset = latest kafka | [2024-03-15 23:13:52,970] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution policy-api | [] simulator | 2024-03-15 23:13:47,083 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} policy-apex-pdp | allow.auto.create.topics = true policy-db-migrator | > upgrade 0480-pdpstatistics.sql grafana | logger=migrator t=2024-03-15T23:13:49.862900123Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-03-15 23:13:52,975] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) mariadb | 2024-03-15 23:13:46+00:00 [Note] [Entrypoint]: Temporary server started. simulator | 2024-03-15 23:13:47,084 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @1799ms policy-apex-pdp | auto.commit.interval.ms = 5000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.863194221Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=293.888µs policy-pap | check.crcs = true kafka | [2024-03-15 23:13:52,975] INFO Client environment:host.name=4ceeac07ec8e (org.apache.zookeeper.ZooKeeper) mariadb | 2024-03-15 23:13:48+00:00 [Note] [Entrypoint]: Creating user policy_user simulator | 2024-03-15 23:13:47,084 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC 
simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4935 ms. policy-apex-pdp | auto.include.jmx.reporter = true policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) grafana | logger=migrator t=2024-03-15T23:13:49.866007531Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-03-15 23:13:52,975] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) mariadb | 2024-03-15 23:13:48+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) simulator | 2024-03-15 23:13:47,085 INFO org.onap.policy.models.simulators started policy-apex-pdp | auto.offset.reset = latest policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.866529615Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=521.604µs policy-pap | client.id = consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-1 kafka | [2024-03-15 23:13:52,975] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) mariadb | policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.87093487Z level=info msg="Executing migration" id="Add check_sum column" policy-pap | client.rack = kafka | [2024-03-15 23:13:52,975] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) mariadb | 2024-03-15 23:13:48+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf policy-apex-pdp | check.crcs = true policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.872927546Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.992306ms policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 kafka | [2024-03-15 23:13:52,975] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/j
akarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/
jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0
-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql grafana | logger=migrator t=2024-03-15T23:13:49.876001043Z level=info msg="Executing migration" id="Add index for dashboard_title" mariadb | policy-pap | enable.auto.commit = true 
kafka | [2024-03-15 23:13:52,976] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | client.id = consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2 policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.876750884Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=749.661µs mariadb | 2024-03-15 23:13:48+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh policy-pap | exclude.internal.topics = true kafka | [2024-03-15 23:13:52,976] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) policy-apex-pdp | client.rack = policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-03-15T23:13:49.879489612Z level=info msg="Executing migration" id="delete tags for deleted dashboards" mariadb | #!/bin/bash -xv policy-pap | fetch.max.bytes = 52428800 kafka | [2024-03-15 23:13:52,976] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.879658286Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=168.514µs policy-apex-pdp | connections.max.idle.ms = 540000 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. 
All rights reserved policy-pap | fetch.max.wait.ms = 500 kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.883901776Z level=info msg="Executing migration" id="delete stars for deleted dashboards" policy-apex-pdp | default.api.timeout.ms = 60000 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. policy-pap | fetch.min.bytes = 1 kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:49.884069541Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=167.825µs policy-apex-pdp | enable.auto.commit = true mariadb | # policy-pap | group.id = a833d76c-6968-4ee8-9b4d-b3fefbf07611 kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) policy-db-migrator | > upgrade 0500-pdpsubgroup.sql grafana | logger=migrator t=2024-03-15T23:13:49.88720848Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" policy-apex-pdp | exclude.internal.topics = true mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); policy-pap | group.instance.id = null kafka | [2024-03-15 23:13:52,976] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:49.888534587Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.325437ms policy-apex-pdp | fetch.max.bytes = 52428800 mariadb | # you may not use this file except in compliance with the License. 
policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-03-15 23:13:52,976] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-03-15T23:13:49.891936023Z level=info msg="Executing migration" id="Add isPublic for dashboard" policy-apex-pdp | fetch.max.wait.ms = 500 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) mariadb | # You may obtain a copy of the License at policy-pap | interceptor.classes = [] kafka | [2024-03-15 23:13:52,976] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-03-15T23:13:49.895577756Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.643153ms policy-apex-pdp | fetch.min.bytes = 1 policy-db-migrator | -------------- mariadb | # policy-pap | internal.leave.group.on.close = true kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-03-15T23:13:49.900078153Z level=info msg="Executing migration" id="create data_source table" policy-apex-pdp | group.id = 2f21b508-fe17-4ab8-9275-1762b58c9ac3 policy-db-migrator | mariadb | # http://www.apache.org/licenses/LICENSE-2.0 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-03-15T23:13:49.901218836Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.140362ms policy-apex-pdp | group.instance.id = null policy-db-migrator | mariadb | 
# policy-pap | isolation.level = read_uncommitted kafka | [2024-03-15 23:13:52,976] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-03-15T23:13:49.906040772Z level=info msg="Executing migration" id="add index data_source.account_id" policy-apex-pdp | heartbeat.interval.ms = 3000 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql mariadb | # Unless required by applicable law or agreed to in writing, software policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-03-15 23:13:52,978] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) grafana | logger=migrator t=2024-03-15T23:13:49.906613848Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=573.716µs policy-apex-pdp | interceptor.classes = [] policy-db-migrator | -------------- mariadb | # distributed under the License is distributed on an "AS IS" BASIS, policy-pap | max.partition.fetch.bytes = 1048576 kafka | [2024-03-15 23:13:52,981] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) grafana | logger=migrator t=2024-03-15T23:13:49.909828469Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" policy-apex-pdp | internal.leave.group.on.close = true policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
policy-pap | max.poll.interval.ms = 300000 kafka | [2024-03-15 23:13:52,989] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-03-15T23:13:49.910386095Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=557.425µs policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-db-migrator | -------------- mariadb | # See the License for the specific language governing permissions and policy-pap | max.poll.records = 500 grafana | logger=migrator t=2024-03-15T23:13:49.918979757Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" policy-apex-pdp | isolation.level = read_uncommitted kafka | [2024-03-15 23:13:52,993] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) policy-db-migrator | mariadb | # limitations under the License. policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-03-15T23:13:49.920540951Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.562094ms policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-03-15 23:13:52,995] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. 
(org.apache.zookeeper.ClientCnxn) policy-db-migrator | mariadb | policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-03-15T23:13:49.923859745Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" policy-apex-pdp | max.partition.fetch.bytes = 1048576 kafka | [2024-03-15 23:13:53,003] INFO Socket connection established, initiating session, client: /172.17.0.7:44430, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-03-15T23:13:49.92472914Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=868.625µs policy-apex-pdp | max.poll.interval.ms = 300000 kafka | [2024-03-15 23:13:53,010] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000034dc50001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) policy-db-migrator | -------------- mariadb | do policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-03-15T23:13:49.928041953Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" policy-apex-pdp | max.poll.records = 500 kafka | [2024-03-15 23:13:53,013] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-03-15T23:13:49.934189077Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.147524ms policy-apex-pdp | metadata.max.age.ms = 300000 kafka | [2024-03-15 23:13:53,343] INFO Cluster ID = LbZnmjPNTK-gKtiXPvevcA (kafka.server.KafkaServer) policy-db-migrator | -------------- mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] grafana | logger=migrator t=2024-03-15T23:13:49.938572241Z level=info msg="Executing migration" id="create data_source table v2" policy-apex-pdp | metric.reporters = [] kafka | [2024-03-15 23:13:53,347] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) policy-db-migrator | mariadb | done policy-pap | receive.buffer.bytes = 65536 grafana | logger=migrator t=2024-03-15T23:13:49.939685312Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.112811ms policy-apex-pdp | metrics.num.samples = 2 kafka | [2024-03-15 23:13:53,406] INFO KafkaConfig values: policy-db-migrator | mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-03-15T23:13:49.942820631Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 
policy-apex-pdp | metrics.recording.level = INFO kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-03-15T23:13:49.943819069Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=995.268µs policy-apex-pdp | metrics.sample.window.ms = 30000 kafka | alter.config.policy.class.name = null policy-db-migrator | -------------- mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-03-15T23:13:49.947004139Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | alter.log.dirs.replication.quota.window.num = 11 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-03-15T23:13:49.948154692Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" 
duration=1.149852ms policy-apex-pdp | receive.buffer.bytes = 65536 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 policy-db-migrator | -------------- mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-03-15T23:13:49.952913106Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" policy-apex-pdp | reconnect.backoff.max.ms = 1000 kafka | authorizer.class.name = policy-db-migrator | mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-03-15T23:13:49.95339288Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=479.453µs policy-apex-pdp | reconnect.backoff.ms = 50 kafka | auto.create.topics.enable = true policy-db-migrator | mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-03-15T23:13:49.956688243Z level=info msg="Executing migration" id="Add column with_credentials" policy-apex-pdp | request.timeout.ms = 30000 kafka | auto.include.jmx.reporter = true policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-03-15T23:13:49.958880865Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.192452ms policy-apex-pdp | retry.backoff.ms = 100 kafka | auto.leader.rebalance.enable = true policy-db-migrator | -------------- mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-pap | sasl.kerberos.service.name = null grafana | 
logger=migrator t=2024-03-15T23:13:49.962909008Z level=info msg="Executing migration" id="Add secure json data column"
policy-apex-pdp | sasl.client.callback.handler.class = null
kafka | background.threads = 10
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-03-15T23:13:49.965060939Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.151331ms
policy-apex-pdp | sasl.jaas.config = null
kafka | broker.heartbeat.interval.ms = 2000
policy-db-migrator | --------------
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-03-15T23:13:49.970705919Z level=info msg="Executing migration" id="Update data_source table charset"
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | broker.id = 1
policy-db-migrator |
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-03-15T23:13:49.97073817Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=34.381µs
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
kafka | broker.id.generation.enable = true
policy-db-migrator |
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-03-15T23:13:49.973695283Z level=info msg="Executing migration" id="Update initial version to 1"
policy-apex-pdp | sasl.kerberos.service.name = null
kafka | broker.rack = null
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-03-15T23:13:49.973920789Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=225.426µs
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | broker.session.timeout.ms = 9000
policy-db-migrator | --------------
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-03-15T23:13:49.977019977Z level=info msg="Executing migration" id="Add read_only data column"
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | client.quota.callback.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-03-15T23:13:49.97923766Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.218863ms
policy-apex-pdp | sasl.login.callback.handler.class = null
kafka | compression.type = producer
policy-db-migrator | --------------
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-03-15T23:13:49.983794728Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
policy-apex-pdp | sasl.login.class = null
kafka | connection.failed.authentication.delay.ms = 100
policy-db-migrator |
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-03-15T23:13:49.983939063Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=144.534µs
policy-apex-pdp | sasl.login.connect.timeout.ms = null
kafka | connections.max.idle.ms = 600000
policy-db-migrator |
mariadb |
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-03-15T23:13:49.987841543Z level=info msg="Executing migration" id="Update json_data with nulls"
policy-apex-pdp | sasl.login.read.timeout.ms = null
kafka | connections.max.reauth.ms = 0
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-03-15T23:13:49.987954246Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=112.493µs
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
kafka | control.plane.listener.name = null
policy-db-migrator | --------------
mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-03-15T23:13:49.990247721Z level=info msg="Executing migration" id="Add uid column"
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
kafka | controlled.shutdown.enable = true
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
grafana | logger=migrator t=2024-03-15T23:13:49.992505675Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.257283ms
policy-pap | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
kafka | controlled.shutdown.max.retries = 3
policy-db-migrator | --------------
mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
grafana | logger=migrator t=2024-03-15T23:13:49.995595282Z level=info msg="Executing migration" id="Update uid value"
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
kafka | controlled.shutdown.retry.backoff.ms = 5000
policy-db-migrator |
mariadb |
grafana | logger=migrator t=2024-03-15T23:13:49.995828988Z level=info msg="Migration successfully executed" id="Update uid value" duration=249.537µs
policy-pap | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
kafka | controller.listener.names = null
policy-db-migrator |
mariadb | 2024-03-15 23:13:49+00:00 [Note] [Entrypoint]: Stopping temporary server
grafana | logger=migrator t=2024-03-15T23:13:50.000678195Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
kafka | controller.quorum.append.linger.ms = 25
policy-db-migrator | > upgrade 0570-toscadatatype.sql
mariadb | 2024-03-15 23:13:49 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
grafana | logger=migrator t=2024-03-15T23:13:50.001488058Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=808.923µs
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.mechanism = GSSAPI
kafka | controller.quorum.election.backoff.max.ms = 1000
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: FTS optimize thread exiting.
grafana | logger=migrator t=2024-03-15T23:13:50.013065075Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
kafka | controller.quorum.election.timeout.ms = 1000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Starting shutdown...
grafana | logger=migrator t=2024-03-15T23:13:50.014000925Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=935.411µs
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
kafka | controller.quorum.fetch.timeout.ms = 2000
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
grafana | logger=migrator t=2024-03-15T23:13:50.017165367Z level=info msg="Executing migration" id="create api_key table"
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
kafka | controller.quorum.request.timeout.ms = 2000
policy-db-migrator |
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Buffer pool(s) dump completed at 240315 23:13:49
grafana | logger=migrator t=2024-03-15T23:13:50.018172229Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.006692ms
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | controller.quorum.retry.backoff.ms = 20
policy-db-migrator |
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
grafana | logger=migrator t=2024-03-15T23:13:50.022727305Z level=info msg="Executing migration" id="add index api_key.account_id"
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | controller.quorum.voters = []
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Shutdown completed; log sequence number 381724; transaction id 298
grafana | logger=migrator t=2024-03-15T23:13:50.023681095Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=953.20µs
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | controller.quota.window.num = 11
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49 0 [Note] mariadbd: Shutdown complete
grafana | logger=migrator t=2024-03-15T23:13:50.026663211Z level=info msg="Executing migration" id="add index api_key.key"
policy-pap | security.protocol = PLAINTEXT
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
kafka | controller.quota.window.size.seconds = 1
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
mariadb |
grafana | logger=migrator t=2024-03-15T23:13:50.027341143Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=676.962µs
policy-pap | security.providers = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
kafka | controller.socket.timeout.ms = 30000
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49+00:00 [Note] [Entrypoint]: Temporary server stopped
grafana | logger=migrator t=2024-03-15T23:13:50.031044432Z level=info msg="Executing migration" id="add index api_key.account_id_name"
policy-pap | send.buffer.bytes = 131072
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
kafka | create.topic.policy.class.name = null
policy-db-migrator |
mariadb |
policy-pap | session.timeout.ms = 45000
grafana | logger=migrator t=2024-03-15T23:13:50.031791176Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=746.604µs
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
kafka | default.replication.factor = 1
policy-db-migrator |
mariadb | 2024-03-15 23:13:49+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-03-15T23:13:50.03564462Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
policy-apex-pdp | security.protocol = PLAINTEXT
kafka | delegation.token.expiry.check.interval.ms = 3600000
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
mariadb |
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-03-15T23:13:50.036355893Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=711.853µs
policy-apex-pdp | security.providers = null
kafka | delegation.token.expiry.time.ms = 86400000
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-03-15T23:13:50.040635191Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
policy-apex-pdp | send.buffer.bytes = 131072
kafka | delegation.token.master.key = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-03-15T23:13:50.042246373Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.613072ms
policy-apex-pdp | session.timeout.ms = 45000
kafka | delegation.token.max.lifetime.ms = 604800000
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Number of transaction pools: 1
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-03-15T23:13:50.048336129Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
kafka | delegation.token.secret.key = null
policy-db-migrator |
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-03-15T23:13:50.049238108Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=899.669µs
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
kafka | delete.records.purgatory.purge.interval.requests = 1
policy-db-migrator |
mariadb | 2024-03-15 23:13:49 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-03-15T23:13:50.054033212Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
policy-apex-pdp | ssl.cipher.suites = null
kafka | delete.topic.enable = true
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
mariadb | 2024-03-15 23:13:49 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-03-15T23:13:50.061361328Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=7.330916ms
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | early.start.listeners = null
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-03-15T23:13:50.06452706Z level=info msg="Executing migration" id="create api_key table v2"
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
kafka | fetch.max.bytes = 57671680
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-03-15T23:13:50.065068757Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=543.677µs
policy-apex-pdp | ssl.engine.factory.class = null
kafka | fetch.purgatory.purge.interval.requests = 1000
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Completed initialization of buffer pool
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-03-15T23:13:50.069009774Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
policy-apex-pdp | ssl.key.password = null
kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
policy-db-migrator |
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-03-15T23:13:50.069829841Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=816.036µs
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
kafka | group.consumer.heartbeat.interval.ms = 5000
policy-db-migrator |
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: 128 rollback segments are active.
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-03-15T23:13:50.072895179Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
policy-apex-pdp | ssl.keystore.certificate.chain = null
kafka | group.consumer.max.heartbeat.interval.ms = 15000
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-03-15T23:13:50.073642243Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=747.064µs
policy-apex-pdp | ssl.keystore.key = null
kafka | group.consumer.max.session.timeout.ms = 60000
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
policy-pap | ssl.provider = null
grafana | logger=migrator t=2024-03-15T23:13:50.076763774Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
policy-apex-pdp | ssl.keystore.location = null
kafka | group.consumer.max.size = 2147483647
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: log sequence number 381724; transaction id 299
policy-pap | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-03-15T23:13:50.077511398Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=749.254µs
policy-apex-pdp | ssl.keystore.password = null
kafka | group.consumer.min.heartbeat.interval.ms = 5000
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
policy-pap | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-03-15T23:13:50.081323001Z level=info msg="Executing migration" id="copy api_key v1 to v2"
policy-apex-pdp | ssl.keystore.type = JKS
kafka | group.consumer.min.session.timeout.ms = 45000
policy-db-migrator |
mariadb | 2024-03-15 23:13:49 0 [Note] Plugin 'FEEDBACK' is disabled.
policy-pap | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-03-15T23:13:50.081645741Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=322.74µs
policy-apex-pdp | ssl.protocol = TLSv1.3
kafka | group.consumer.session.timeout.ms = 45000
policy-db-migrator |
mariadb | 2024-03-15 23:13:49 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
policy-pap | ssl.truststore.location = null
grafana | logger=migrator t=2024-03-15T23:13:50.085168844Z level=info msg="Executing migration" id="Drop old table api_key_v1"
policy-apex-pdp | ssl.provider = null
kafka | group.coordinator.new.enable = false
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
mariadb | 2024-03-15 23:13:49 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
policy-pap | ssl.truststore.password = null
grafana | logger=migrator t=2024-03-15T23:13:50.08658497Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.415796ms
policy-apex-pdp | ssl.secure.random.implementation = null
kafka | group.coordinator.threads = 1
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49 0 [Note] Server socket created on IP: '0.0.0.0'.
policy-pap | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-03-15T23:13:50.09001494Z level=info msg="Executing migration" id="Update api_key table charset"
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
kafka | group.initial.rebalance.delay.ms = 3000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
mariadb | 2024-03-15 23:13:49 0 [Note] Server socket created on IP: '::'.
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-03-15T23:13:50.090064382Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=50.882µs
policy-apex-pdp | ssl.truststore.certificates = null
kafka | group.max.session.timeout.ms = 1800000
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:49 0 [Note] mariadbd: ready for connections.
policy-pap |
grafana | logger=migrator t=2024-03-15T23:13:50.094249877Z level=info msg="Executing migration" id="Add expires to api_key table"
policy-apex-pdp | ssl.truststore.location = null
kafka | group.max.size = 2147483647
policy-db-migrator |
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
policy-pap | [2024-03-15T23:14:19.675+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
grafana | logger=migrator t=2024-03-15T23:13:50.096943273Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.688196ms
policy-apex-pdp | ssl.truststore.password = null
kafka | group.min.session.timeout.ms = 6000
policy-db-migrator |
mariadb | 2024-03-15 23:13:49 0 [Note] InnoDB: Buffer pool(s) load completed at 240315 23:13:49
policy-pap | [2024-03-15T23:14:19.676+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-03-15T23:13:50.100479647Z level=info msg="Executing migration" id="Add service account foreign key"
policy-apex-pdp | ssl.truststore.type = JKS
kafka | initial.broker.registration.timeout.ms = 60000
policy-db-migrator | > upgrade 0630-toscanodetype.sql
mariadb | 2024-03-15 23:13:50 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
policy-pap | [2024-03-15T23:14:19.676+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544459674
grafana | logger=migrator t=2024-03-15T23:13:50.104537278Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=4.055131ms
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | inter.broker.listener.name = PLAINTEXT
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:51 52 [Warning] Aborted connection 52 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication)
policy-pap | [2024-03-15T23:14:19.678+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-1, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Subscribed to topic(s): policy-pdp-pap
grafana | logger=migrator t=2024-03-15T23:13:50.107767212Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
policy-apex-pdp |
kafka | inter.broker.protocol.version = 3.6-IV2
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
mariadb | 2024-03-15 23:13:52 97 [Warning] Aborted connection 97 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
policy-pap | [2024-03-15T23:14:19.679+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
grafana | logger=migrator t=2024-03-15T23:13:50.107939177Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=172.475µs
policy-apex-pdp | [2024-03-15T23:14:23.386+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
kafka | kafka.metrics.polling.interval.secs = 10
policy-db-migrator | --------------
mariadb | 2024-03-15 23:13:53 144 [Warning] Aborted connection 144 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
policy-pap | allow.auto.create.topics = true
grafana | logger=migrator t=2024-03-15T23:13:50.113957071Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
policy-apex-pdp | [2024-03-15T23:14:23.386+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
kafka | kafka.metrics.reporters = []
policy-db-migrator |
policy-pap | auto.commit.interval.ms = 5000
grafana | logger=migrator t=2024-03-15T23:13:50.116852164Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.897273ms
policy-apex-pdp | [2024-03-15T23:14:23.386+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544463386
kafka | leader.imbalance.check.interval.seconds = 300
policy-db-migrator |
policy-pap | auto.include.jmx.reporter = true
grafana | logger=migrator t=2024-03-15T23:13:50.120956236Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
policy-apex-pdp | [2024-03-15T23:14:23.387+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Subscribed to topic(s): policy-pdp-pap
kafka | leader.imbalance.per.broker.percentage = 10
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-pap | auto.offset.reset = latest
grafana | logger=migrator t=2024-03-15T23:13:50.123659213Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.702737ms
policy-apex-pdp | [2024-03-15T23:14:23.392+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c99ced55-aa2f-48db-bfd1-cad73b9b866f, alive=false, publisher=null]]: starting
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
policy-db-migrator | --------------
policy-pap | bootstrap.servers = [kafka:9092]
grafana | logger=migrator t=2024-03-15T23:13:50.127036192Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
policy-apex-pdp | [2024-03-15T23:14:23.408+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-pap | check.crcs = true
grafana | logger=migrator t=2024-03-15T23:13:50.127886619Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=850.167µs
policy-apex-pdp | acks = -1
kafka | log.cleaner.backoff.ms = 15000
policy-db-migrator | --------------
policy-pap | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-03-15T23:13:50.130968578Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
policy-apex-pdp | auto.include.jmx.reporter = true
kafka | log.cleaner.dedupe.buffer.size = 134217728
policy-db-migrator |
policy-pap | client.id = consumer-policy-pap-2
grafana | logger=migrator t=2024-03-15T23:13:50.131521796Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=552.928µs
policy-apex-pdp | batch.size = 16384
kafka | log.cleaner.delete.retention.ms = 86400000
policy-db-migrator |
policy-pap | client.rack =
grafana | logger=migrator t=2024-03-15T23:13:50.135857006Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
policy-apex-pdp | bootstrap.servers = [kafka:9092]
kafka | log.cleaner.enable = true
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-03-15T23:13:50.136700493Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=840.777µs
policy-apex-pdp | buffer.memory = 33554432
kafka | log.cleaner.io.buffer.load.factor = 0.9
policy-db-migrator | --------------
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-03-15T23:13:50.14003060Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
kafka | log.cleaner.io.buffer.size = 524288
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-03-15T23:13:50.140858897Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=827.917µs
policy-apex-pdp | client.id = producer-1
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
policy-db-migrator | --------------
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-03-15T23:13:50.144953189Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
policy-apex-pdp | compression.type = none
kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
policy-db-migrator |
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-03-15T23:13:50.145812836Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=861.088µs
policy-apex-pdp | connections.max.idle.ms = 540000
kafka | log.cleaner.min.cleanable.ratio = 0.5
policy-db-migrator |
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-03-15T23:13:50.150362883Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
policy-apex-pdp | delivery.timeout.ms = 120000
kafka | log.cleaner.min.compaction.lag.ms = 0
policy-db-migrator | > upgrade 0660-toscaparameter.sql
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-03-15T23:13:50.152263134Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.900642ms
policy-apex-pdp | enable.idempotence = true
kafka | log.cleaner.threads = 1
policy-db-migrator | --------------
policy-pap | group.id = policy-pap
grafana | logger=migrator t=2024-03-15T23:13:50.156107608Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
policy-apex-pdp | interceptor.classes = []
kafka | log.cleanup.policy = [delete]
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-03-15T23:13:50.156324195Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=70.213µs
policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | log.dir = /tmp/kafka-logs
policy-db-migrator | --------------
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-03-15T23:13:50.159792836Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
policy-apex-pdp | linger.ms = 0
kafka | log.dirs = /var/lib/kafka/data
policy-db-migrator |
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-03-15T23:13:50.159819857Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=27.381µs
policy-apex-pdp | max.block.ms = 60000
kafka | log.flush.interval.messages = 9223372036854775807
policy-db-migrator |
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-03-15T23:13:50.164075134Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
policy-apex-pdp | max.in.flight.requests.per.connection = 5
kafka | log.flush.interval.ms = null
policy-db-migrator | > upgrade 0670-toscapolicies.sql
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-03-15T23:13:50.1673709Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.294126ms
policy-apex-pdp | max.request.size = 1048576
kafka | log.flush.offset.checkpoint.interval.ms = 60000
policy-db-migrator | --------------
policy-pap | isolation.level = read_uncommitted
policy-apex-pdp | metadata.max.age.ms = 300000
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
grafana | logger=migrator t=2024-03-15T23:13:50.171675909Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version))
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | metadata.max.idle.ms = 300000
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
grafana | logger=migrator t=2024-03-15T23:13:50.175221903Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.544664ms
policy-db-migrator | --------------
policy-pap | max.partition.fetch.bytes = 1048576
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
grafana | logger=migrator t=2024-03-15T23:13:50.179020835Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
policy-pap | max.poll.interval.ms = 300000
policy-pap | max.poll.records = 500
policy-apex-pdp | metrics.recording.level = INFO
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:50.179087647Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=67.292µs
policy-pap | metadata.max.age.ms = 300000
kafka | log.index.interval.bytes = 4096
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:50.184500601Z level=info msg="Executing migration" id="create quota table v1"
policy-pap | metric.reporters = []
kafka | log.index.size.max.bytes = 10485760
policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql
grafana | logger=migrator t=2024-03-15T23:13:50.18539050Z level=info msg="Migration successfully executed" id="create quota table v1" duration=889.929µs
policy-pap | metrics.num.samples = 2
kafka | log.local.retention.bytes = -2
policy-apex-pdp | partitioner.availability.timeout.ms = 0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:50.188780969Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
policy-pap | metrics.recording.level = INFO
kafka | log.local.retention.ms = -2
policy-apex-pdp | partitioner.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
grafana | logger=migrator t=2024-03-15T23:13:50.189789552Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.006952ms
policy-pap | metrics.sample.window.ms = 30000
kafka | log.message.downconversion.enable = true
policy-apex-pdp | partitioner.ignore.keys = false
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:50.193307615Z level=info msg="Executing migration" id="Update quota table charset"
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
kafka | log.message.format.version = 3.0-IV1
policy-apex-pdp | receive.buffer.bytes = 32768
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:50.193345856Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=42.131µs
policy-pap | receive.buffer.bytes = 65536
kafka | log.message.timestamp.after.max.ms = 9223372036854775807
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:50.196760446Z level=info msg="Executing migration" id="create plugin_setting table"
policy-pap | reconnect.backoff.max.ms = 1000
kafka | log.message.timestamp.before.max.ms = 9223372036854775807
policy-apex-pdp | reconnect.backoff.ms = 50
policy-db-migrator | > upgrade 0690-toscapolicy.sql
grafana | logger=migrator t=2024-03-15T23:13:50.197742788Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=982.342µs
policy-pap | reconnect.backoff.ms = 50
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
policy-apex-pdp | request.timeout.ms = 30000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:50.203155942Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
policy-pap | request.timeout.ms = 30000
kafka | log.message.timestamp.type = CreateTime
policy-apex-pdp | retries = 2147483647
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version))
grafana | logger=migrator t=2024-03-15T23:13:50.203924857Z
level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=768.304µs policy-pap | retry.backoff.ms = 100 kafka | log.preallocate = false policy-apex-pdp | retry.backoff.ms = 100 policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:50.208310898Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" policy-pap | sasl.client.callback.handler.class = null kafka | log.retention.bytes = -1 policy-apex-pdp | sasl.client.callback.handler.class = null policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:50.211014725Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.703827ms policy-pap | sasl.jaas.config = null kafka | log.retention.check.interval.ms = 300000 policy-apex-pdp | sasl.jaas.config = null policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:50.214103364Z level=info msg="Executing migration" id="Update plugin_setting table charset" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | log.retention.hours = 168 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | > upgrade 0700-toscapolicytype.sql grafana | logger=migrator t=2024-03-15T23:13:50.214131485Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=28.441µs policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | log.retention.minutes = null policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:50.218285529Z level=info msg="Executing migration" id="create session table" policy-pap | sasl.kerberos.service.name = null kafka | log.retention.ms = null policy-apex-pdp | sasl.kerberos.service.name = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) 
NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) grafana | logger=migrator t=2024-03-15T23:13:50.219081724Z level=info msg="Migration successfully executed" id="create session table" duration=795.815µs policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | log.roll.hours = 168 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:50.223536708Z level=info msg="Executing migration" id="Drop old table playlist table" policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | log.roll.jitter.hours = 0 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:50.22361566Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=79.412µs policy-pap | sasl.login.callback.handler.class = null kafka | log.roll.jitter.ms = null policy-apex-pdp | sasl.login.callback.handler.class = null policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:50.226034118Z level=info msg="Executing migration" id="Drop old table playlist_item table" policy-pap | sasl.login.class = null kafka | log.roll.ms = null policy-apex-pdp | sasl.login.class = null policy-db-migrator | > upgrade 0710-toscapolicytypes.sql grafana | logger=migrator t=2024-03-15T23:13:50.22609575Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=61.772µs policy-pap | sasl.login.connect.timeout.ms = null kafka | log.segment.bytes = 1073741824 policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:50.231153993Z level=info msg="Executing migration" id="create playlist table v2" policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 
policy-apex-pdp | sasl.login.read.timeout.ms = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) grafana | logger=migrator t=2024-03-15T23:13:50.23198899Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=834.587µs policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-03-15T23:13:50.236743043Z level=info msg="Executing migration" id="create playlist item table v2" grafana | logger=migrator t=2024-03-15T23:13:50.237701574Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=959.141µs policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-03-15T23:13:50.241347301Z level=info msg="Executing migration" id="Update playlist table charset" policy-pap | sasl.mechanism = GSSAPI kafka | log.segment.delete.delay.ms = 60000 policy-db-migrator | policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 grafana | logger=migrator t=2024-03-15T23:13:50.241377102Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=30.801µs policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | max.connection.creation.rate = 2147483647 policy-db-migrator | policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 grafana | logger=migrator t=2024-03-15T23:13:50.245734952Z level=info msg="Executing migration" id="Update playlist_item table charset" policy-pap | sasl.oauthbearer.expected.audience = null kafka | max.connections = 2147483647 policy-db-migrator | > upgrade 
0720-toscapolicytypes_toscapolicytype.sql policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-03-15T23:13:50.245786924Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=55.122µs policy-pap | sasl.oauthbearer.expected.issuer = null kafka | max.connections.per.ip = 2147483647 policy-db-migrator | -------------- policy-apex-pdp | sasl.login.retry.backoff.ms = 100 grafana | logger=migrator t=2024-03-15T23:13:50.250899038Z level=info msg="Executing migration" id="Add playlist column created_at" policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | max.connections.per.ip.overrides = policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-apex-pdp | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-03-15T23:13:50.255858278Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.95837ms policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | max.incremental.fetch.session.cache.slots = 1000 policy-db-migrator | -------------- policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-03-15T23:13:50.259314319Z level=info msg="Executing migration" id="Add playlist column updated_at" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | message.max.bytes = 1048588 policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-03-15T23:13:50.263405901Z level=info msg="Migration 
successfully executed" id="Add playlist column updated_at" duration=4.071641ms policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-03-15T23:13:50.267635737Z level=info msg="Executing migration" id="drop preferences table v2" kafka | metadata.log.dir = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | > upgrade 0730-toscaproperty.sql grafana | logger=migrator t=2024-03-15T23:13:50.267765481Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=132.544µs policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | metadata.log.max.snapshot.interval.ms = 3600000 policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:50.271942836Z level=info msg="Executing migration" id="drop preferences table v3" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | metadata.log.segment.bytes = 1073741824 policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-03-15T23:13:50.272026028Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=83.612µs 
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null kafka | metadata.log.segment.min.bytes = 8388608 policy-pap | security.protocol = PLAINTEXT policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:50.284431088Z level=info msg="Executing migration" id="create preferences table v3" policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope kafka | metadata.log.segment.ms = 604800000 policy-pap | security.providers = null policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:50.285486482Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.057454ms policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub kafka | metadata.max.idle.interval.ms = 500 policy-pap | send.buffer.bytes = 131072 policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:50.290782042Z level=info msg="Executing migration" id="Update preferences table charset" policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null kafka | metadata.max.retention.bytes = 104857600 policy-pap | session.timeout.ms = 45000 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql grafana | logger=migrator t=2024-03-15T23:13:50.290851394Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=75.732µs policy-apex-pdp | security.protocol = PLAINTEXT kafka | metadata.max.retention.ms = 604800000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:50.295850205Z level=info msg="Executing migration" id="Add column team_id in preferences" policy-apex-pdp | security.providers = null kafka | metric.reporters = [] policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) 
NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) grafana | logger=migrator t=2024-03-15T23:13:50.30126918Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.420094ms policy-apex-pdp | send.buffer.bytes = 131072 kafka | metrics.num.samples = 2 policy-pap | ssl.cipher.suites = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:50.305901329Z level=info msg="Executing migration" id="Update team_id column values in preferences" policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 kafka | metrics.recording.level = INFO policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:50.306050264Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=149.414µs policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 kafka | metrics.sample.window.ms = 30000 policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:50.308970247Z level=info msg="Executing migration" id="Add column week_start in preferences" policy-apex-pdp | ssl.cipher.suites = null kafka | min.insync.replicas = 1 policy-pap | ssl.engine.factory.class = null policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql grafana | logger=migrator t=2024-03-15T23:13:50.312069287Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.09603ms policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.key.password = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:50.318887517Z level=info msg="Executing migration" id="Add column preferences.json_data" policy-apex-pdp | ssl.endpoint.identification.algorithm = https kafka | node.id = 1 policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | CREATE TABLE IF NOT 
EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) grafana | logger=migrator t=2024-03-15T23:13:50.323824096Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.934478ms policy-apex-pdp | ssl.engine.factory.class = null kafka | num.io.threads = 8 policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:50.327319108Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" policy-apex-pdp | ssl.key.password = null kafka | num.network.threads = 3 policy-pap | ssl.keystore.key = null policy-db-migrator | policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-03-15T23:13:50.327426302Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=108.663µs policy-db-migrator | kafka | num.partitions = 1 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-03-15T23:13:50.330980026Z level=info msg="Executing migration" id="Add preferences index org_id" policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql kafka | num.recovery.threads.per.data.dir = 1 policy-apex-pdp | ssl.keystore.key = null policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-03-15T23:13:50.331890215Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=910.029µs policy-db-migrator | -------------- kafka | num.replica.alter.log.dirs.threads = null policy-apex-pdp | ssl.keystore.location = null policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-03-15T23:13:50.337036361Z level=info msg="Executing migration" id="Add preferences index user_id" policy-db-migrator | CREATE 
TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | num.replica.fetchers = 1 policy-apex-pdp | ssl.keystore.password = null policy-pap | ssl.provider = null grafana | logger=migrator t=2024-03-15T23:13:50.338683104Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.644813ms policy-db-migrator | -------------- kafka | offset.metadata.max.bytes = 4096 policy-apex-pdp | ssl.keystore.type = JKS policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-03-15T23:13:50.361132796Z level=info msg="Executing migration" id="create alert table v1" policy-db-migrator | kafka | offsets.commit.required.acks = -1 policy-apex-pdp | ssl.protocol = TLSv1.3 policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-03-15T23:13:50.362867982Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.739116ms policy-db-migrator | kafka | offsets.commit.timeout.ms = 5000 policy-apex-pdp | ssl.provider = null policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-03-15T23:13:50.372189302Z level=info msg="Executing migration" id="add index alert org_id & id " policy-db-migrator | > upgrade 0770-toscarequirement.sql kafka | offsets.load.buffer.size = 5242880 policy-apex-pdp | ssl.secure.random.implementation = null policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-03-15T23:13:50.373843485Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.654063ms 
policy-db-migrator | -------------- kafka | offsets.retention.check.interval.ms = 600000 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-03-15T23:13:50.377324488Z level=info msg="Executing migration" id="add index alert state" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) kafka | offsets.retention.minutes = 10080 policy-apex-pdp | ssl.truststore.certificates = null policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-03-15T23:13:50.378281388Z level=info msg="Migration successfully executed" id="add index alert state" duration=956.901µs policy-db-migrator | -------------- kafka | offsets.topic.compression.codec = 0 policy-apex-pdp | ssl.truststore.location = null policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer grafana | logger=migrator t=2024-03-15T23:13:50.381660167Z level=info msg="Executing migration" id="add index alert dashboard_id" policy-db-migrator | kafka | offsets.topic.num.partitions = 50 policy-apex-pdp | ssl.truststore.password = null policy-pap | grafana | logger=migrator t=2024-03-15T23:13:50.382646679Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=986.542µs policy-db-migrator | kafka | offsets.topic.replication.factor = 1 policy-apex-pdp | ssl.truststore.type = JKS policy-pap | [2024-03-15T23:14:19.684+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-03-15T23:13:50.391324378Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 
policy-db-migrator | > upgrade 0780-toscarequirements.sql kafka | offsets.topic.segment.bytes = 104857600 policy-apex-pdp | transaction.timeout.ms = 60000 policy-pap | [2024-03-15T23:14:19.684+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-03-15T23:13:50.392338071Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.011673ms policy-db-migrator | -------------- kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding policy-apex-pdp | transactional.id = null policy-pap | [2024-03-15T23:14:19.684+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544459684 grafana | logger=migrator t=2024-03-15T23:13:50.399401048Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) kafka | password.encoder.iterations = 4096 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | [2024-03-15T23:14:19.685+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap grafana | logger=migrator t=2024-03-15T23:13:50.401729073Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=2.329075ms policy-db-migrator | -------------- kafka | password.encoder.key.length = 128 policy-apex-pdp | grafana | logger=migrator t=2024-03-15T23:13:50.405818555Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" policy-db-migrator | policy-pap | [2024-03-15T23:14:20.012+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., 
pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json kafka | password.encoder.keyfactory.algorithm = null policy-apex-pdp | [2024-03-15T23:14:23.418+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. grafana | logger=migrator t=2024-03-15T23:13:50.406765535Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=947.11µs policy-db-migrator | policy-pap | [2024-03-15T23:14:20.192+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning kafka | password.encoder.old.secret = null policy-apex-pdp | [2024-03-15T23:14:23.440+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 grafana | logger=migrator t=2024-03-15T23:13:50.412127908Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql policy-pap | [2024-03-15T23:14:20.474+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@55cb3b7, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@497fd334, org.springframework.security.web.context.SecurityContextHolderFilter@7ce4498f, org.springframework.security.web.header.HeaderWriterFilter@176e839e, org.springframework.security.web.authentication.logout.LogoutFilter@6e489bb8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@6787bd41, 
org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7bd7d71c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@ce0bbd5, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@280c3dc0, org.springframework.security.web.access.ExceptionTranslationFilter@60fe75f7, org.springframework.security.web.access.intercept.AuthorizationFilter@3d3b852e] kafka | password.encoder.secret = null policy-apex-pdp | [2024-03-15T23:14:23.440+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 grafana | logger=migrator t=2024-03-15T23:13:50.425466487Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.337929ms policy-db-migrator | -------------- policy-pap | [2024-03-15T23:14:21.429+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder policy-apex-pdp | [2024-03-15T23:14:23.441+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544463440 grafana | logger=migrator t=2024-03-15T23:13:50.430403466Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | [2024-03-15T23:14:21.555+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] kafka | process.roles = [] policy-apex-pdp | [2024-03-15T23:14:23.441+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink 
[getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c99ced55-aa2f-48db-bfd1-cad73b9b866f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
grafana | logger=migrator t=2024-03-15T23:13:50.431133979Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=730.373µs
policy-db-migrator | --------------
policy-pap | [2024-03-15T23:14:21.574+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
kafka | producer.id.expiration.check.interval.ms = 600000
policy-apex-pdp | [2024-03-15T23:14:23.445+00:00|INFO|ServiceManager|main] service manager starting set alive
grafana | logger=migrator t=2024-03-15T23:13:50.434172937Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
policy-db-migrator |
policy-pap | [2024-03-15T23:14:21.594+00:00|INFO|ServiceManager|main] Policy PAP starting
kafka | producer.id.expiration.ms = 86400000
policy-apex-pdp | [2024-03-15T23:14:23.446+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
grafana | logger=migrator t=2024-03-15T23:13:50.436591365Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=2.418748ms
policy-db-migrator |
policy-pap | [2024-03-15T23:14:21.594+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
kafka | producer.purgatory.purge.interval.requests = 1000
policy-apex-pdp | [2024-03-15T23:14:23.448+00:00|INFO|ServiceManager|main] service manager starting topic sinks
grafana | logger=migrator t=2024-03-15T23:13:50.444244921Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-pap | [2024-03-15T23:14:21.595+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
kafka | queued.max.request.bytes = -1
policy-apex-pdp | [2024-03-15T23:14:23.448+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
grafana | logger=migrator t=2024-03-15T23:13:50.444848421Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=612.85µs
policy-db-migrator | --------------
policy-pap | [2024-03-15T23:14:21.595+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
kafka | queued.max.requests = 500
policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
grafana | logger=migrator t=2024-03-15T23:13:50.451845426Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-pap | [2024-03-15T23:14:21.595+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
kafka | quota.window.num = 11
policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
grafana | logger=migrator t=2024-03-15T23:13:50.45260087Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=761.504µs
policy-db-migrator | --------------
policy-pap | [2024-03-15T23:14:21.596+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
kafka | quota.window.size.seconds = 1
policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
grafana | logger=migrator t=2024-03-15T23:13:50.455845245Z level=info msg="Executing migration" id="create alert_notification table v1"
policy-db-migrator |
policy-pap | [2024-03-15T23:14:21.596+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2f21b508-fe17-4ab8-9275-1762b58c9ac3, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866
grafana | logger=migrator t=2024-03-15T23:13:50.456687632Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=842.197µs
policy-db-migrator |
policy-pap | [2024-03-15T23:14:21.601+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a833d76c-6968-4ee8-9b4d-b3fefbf07611, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1755aee6
kafka | remote.log.manager.task.interval.ms = 30000
policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=2f21b508-fe17-4ab8-9275-1762b58c9ac3, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
grafana | logger=migrator t=2024-03-15T23:13:50.461460995Z level=info msg="Executing migration" id="Add column is_default"
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-pap | [2024-03-15T23:14:21.614+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a833d76c-6968-4ee8-9b4d-b3fefbf07611, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
policy-apex-pdp | [2024-03-15T23:14:23.450+00:00|INFO|ServiceManager|main] service manager starting Create REST server
grafana | logger=migrator t=2024-03-15T23:13:50.465391512Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.934027ms
policy-db-migrator | --------------
policy-pap | [2024-03-15T23:14:21.615+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
grafana | logger=migrator t=2024-03-15T23:13:50.468840773Z level=info msg="Executing migration" id="Add column frequency"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | allow.auto.create.topics = true
policy-apex-pdp | [2024-03-15T23:14:23.469+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
kafka | remote.log.manager.thread.pool.size = 10
grafana | logger=migrator t=2024-03-15T23:13:50.47278453Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.943667ms
policy-db-migrator | --------------
policy-pap | auto.commit.interval.ms = 5000
policy-apex-pdp | []
kafka | remote.log.metadata.custom.metadata.max.bytes = 128
grafana | logger=migrator t=2024-03-15T23:13:50.478413671Z level=info msg="Executing migration" id="Add column send_reminder"
policy-db-migrator |
policy-pap | auto.include.jmx.reporter = true
policy-apex-pdp | [2024-03-15T23:14:23.472+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
grafana | logger=migrator t=2024-03-15T23:13:50.482140381Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.72456ms
policy-db-migrator |
policy-pap | auto.offset.reset = latest
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"138adf8a-85b2-4615-8a26-a9d5f452bbb8","timestampMs":1710544463450,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"}
kafka | remote.log.metadata.manager.class.path = null
grafana | logger=migrator t=2024-03-15T23:13:50.487337188Z level=info msg="Executing migration" id="Add column disable_resolve_message"
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-pap | bootstrap.servers = [kafka:9092]
policy-apex-pdp | [2024-03-15T23:14:23.649+00:00|INFO|ServiceManager|main] service manager starting Rest Server
kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
grafana | logger=migrator t=2024-03-15T23:13:50.489857289Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.520001ms
policy-db-migrator | --------------
policy-pap | check.crcs = true
policy-apex-pdp | [2024-03-15T23:14:23.649+00:00|INFO|ServiceManager|main] service manager starting
kafka | remote.log.metadata.manager.listener.name = null
grafana | logger=migrator t=2024-03-15T23:13:50.493333091Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
policy-pap | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | [2024-03-15T23:14:23.649+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | remote.log.reader.max.pending.tasks = 100
grafana | logger=migrator t=2024-03-15T23:13:50.494295942Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=962.101µs
policy-pap | client.id = consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3
policy-apex-pdp | [2024-03-15T23:14:23.649+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | --------------
kafka | remote.log.reader.threads = 10
grafana | logger=migrator t=2024-03-15T23:13:50.497561597Z level=info msg="Executing migration" id="Update alert table charset"
policy-pap | client.rack =
policy-apex-pdp | [2024-03-15T23:14:23.659+00:00|INFO|ServiceManager|main] service manager started
policy-db-migrator |
kafka | remote.log.storage.manager.class.name = null
grafana | logger=migrator t=2024-03-15T23:13:50.497589828Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=28.981µs
policy-pap | connections.max.idle.ms = 540000
policy-apex-pdp | [2024-03-15T23:14:23.659+00:00|INFO|ServiceManager|main] service manager started
policy-db-migrator |
kafka | remote.log.storage.manager.class.path = null
grafana | logger=migrator t=2024-03-15T23:13:50.502564158Z level=info msg="Executing migration" id="Update alert_notification table charset"
policy-pap | default.api.timeout.ms = 60000
policy-apex-pdp | [2024-03-15T23:14:23.659+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
kafka | remote.log.storage.manager.impl.prefix = rsm.config.
grafana | logger=migrator t=2024-03-15T23:13:50.502590299Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=27.161µs
policy-pap | enable.auto.commit = true
policy-db-migrator | --------------
kafka | remote.log.storage.system.enable = false
policy-apex-pdp | [2024-03-15T23:14:23.659+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-03-15T23:13:50.506850326Z level=info msg="Executing migration" id="create notification_journal table v1"
policy-pap | exclude.internal.topics = true
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
kafka | replica.fetch.backoff.ms = 1000
policy-apex-pdp | [2024-03-15T23:14:23.802+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: LbZnmjPNTK-gKtiXPvevcA
grafana | logger=migrator t=2024-03-15T23:13:50.509917305Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=3.069569ms
policy-pap | fetch.max.bytes = 52428800
policy-db-migrator | --------------
kafka | replica.fetch.max.bytes = 1048576
policy-apex-pdp | [2024-03-15T23:14:23.802+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Cluster ID: LbZnmjPNTK-gKtiXPvevcA
grafana | logger=migrator t=2024-03-15T23:13:50.513626574Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
policy-pap | fetch.max.wait.ms = 500
policy-db-migrator |
kafka | replica.fetch.min.bytes = 1
policy-apex-pdp | [2024-03-15T23:14:23.804+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-apex-pdp | [2024-03-15T23:14:23.804+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-pap | fetch.min.bytes = 1
policy-db-migrator |
kafka | replica.fetch.response.max.bytes = 10485760
policy-apex-pdp | [2024-03-15T23:14:23.811+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] (Re-)joining group
grafana | logger=migrator t=2024-03-15T23:13:50.514752731Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.123957ms
policy-pap | group.id = a833d76c-6968-4ee8-9b4d-b3fefbf07611
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
kafka | replica.fetch.wait.max.ms = 500
policy-apex-pdp | [2024-03-15T23:14:23.842+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Request joining group due to: need to re-join with the given member-id: consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70
grafana | logger=migrator t=2024-03-15T23:13:50.517951064Z level=info msg="Executing migration" id="drop alert_notification_journal"
policy-pap | group.instance.id = null
policy-db-migrator | --------------
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
policy-apex-pdp | [2024-03-15T23:14:23.842+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
grafana | logger=migrator t=2024-03-15T23:13:50.518701248Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=749.324µs
policy-pap | heartbeat.interval.ms = 3000
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
kafka | replica.lag.time.max.ms = 30000
policy-apex-pdp | [2024-03-15T23:14:23.843+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] (Re-)joining group
grafana | logger=migrator t=2024-03-15T23:13:50.523968037Z level=info msg="Executing migration" id="create alert_notification_state table v1"
policy-pap | interceptor.classes = []
policy-db-migrator | --------------
kafka | replica.selector.class = null
policy-apex-pdp | [2024-03-15T23:14:24.283+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-03-15T23:13:50.524558856Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=593.249µs
policy-db-migrator |
kafka | replica.socket.receive.buffer.bytes = 65536
policy-apex-pdp | [2024-03-15T23:14:24.284+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-03-15T23:13:50.527188441Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
policy-db-migrator |
kafka | replica.socket.timeout.ms = 30000
policy-apex-pdp | [2024-03-15T23:14:26.853+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Successfully joined group with generation Generation{generationId=1, memberId='consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70', protocol='range'}
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-03-15T23:13:50.528015948Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=827.167µs
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
kafka | replication.quota.window.num = 11
policy-apex-pdp | [2024-03-15T23:14:26.865+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Finished assignment for group at generation 1: {consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-03-15T23:13:50.53026828Z level=info msg="Executing migration" id="Add for to alert table"
policy-db-migrator | --------------
kafka | replication.quota.window.size.seconds = 1
policy-apex-pdp | [2024-03-15T23:14:26.875+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Successfully synced group in generation Generation{generationId=1, memberId='consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70', protocol='range'}
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-03-15T23:13:50.536929034Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=6.661544ms
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
kafka | request.timeout.ms = 30000
policy-apex-pdp | [2024-03-15T23:14:26.875+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-03-15T23:13:50.541851123Z level=info msg="Executing migration" id="Add column uid in alert_notification"
policy-db-migrator | --------------
kafka | reserved.broker.max.id = 1000
policy-apex-pdp | [2024-03-15T23:14:26.878+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-03-15T23:13:50.545359156Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.508783ms
policy-db-migrator |
kafka | sasl.client.callback.handler.class = null
policy-apex-pdp | [2024-03-15T23:14:26.886+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Found no committed offset for partition policy-pdp-pap-0
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-03-15T23:13:50.54861153Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
policy-db-migrator |
kafka | sasl.enabled.mechanisms = [GSSAPI]
policy-apex-pdp | [2024-03-15T23:14:26.897+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2, groupId=2f21b508-fe17-4ab8-9275-1762b58c9ac3] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-03-15T23:13:50.548765715Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=156.495µs
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
kafka | sasl.jaas.config = null
policy-apex-pdp | [2024-03-15T23:14:43.450+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-03-15T23:13:50.552234257Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
policy-db-migrator | --------------
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ba446a9c-6622-41fc-a636-ab4cca84c30b","timestampMs":1710544483450,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"}
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-03-15T23:13:50.552870098Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=635.28µs
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
kafka | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | [2024-03-15T23:14:43.475+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-03-15T23:13:50.557084283Z level=info msg="Executing migration" id="Remove unique index org_id_name"
policy-db-migrator | --------------
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ba446a9c-6622-41fc-a636-ab4cca84c30b","timestampMs":1710544483450,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"}
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-03-15T23:13:50.557949291Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=861.998µs
policy-db-migrator |
kafka | sasl.kerberos.service.name = null
policy-apex-pdp | [2024-03-15T23:14:43.479+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-03-15T23:13:50.561250957Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
policy-db-migrator |
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | [2024-03-15T23:14:43.639+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-03-15T23:13:50.567183418Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.927141ms
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a918cf66-cf68-45ea-b4be-5105781f3d6f","timestampMs":1710544483578,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-03-15T23:13:50.57190503Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
policy-db-migrator | --------------
kafka | sasl.login.callback.handler.class = null
policy-apex-pdp | [2024-03-15T23:14:43.657+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-03-15T23:13:50.571994343Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=89.923µs
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
kafka | sasl.login.class = null
policy-apex-pdp | [2024-03-15T23:14:43.657+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-03-15T23:13:50.576269131Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
policy-db-migrator | --------------
kafka | sasl.login.connect.timeout.ms = null
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c8f034e9-82b5-4f8d-b347-826ceabb026b","timestampMs":1710544483657,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"}
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-03-15T23:13:50.577124448Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=855.737µs
policy-db-migrator |
kafka | sasl.login.read.timeout.ms = null
policy-apex-pdp | [2024-03-15T23:14:43.663+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-03-15T23:13:50.581084866Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
policy-db-migrator |
kafka | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a918cf66-cf68-45ea-b4be-5105781f3d6f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"dc56e0cf-4911-4e66-a485-4debe52e093d","timestampMs":1710544483663,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-03-15T23:13:50.582057537Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=972.101µs
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
kafka | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | [2024-03-15T23:14:43.677+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-03-15T23:13:50.585613211Z level=info msg="Executing migration" id="Drop old annotation table v4"
policy-db-migrator | --------------
kafka | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c8f034e9-82b5-4f8d-b347-826ceabb026b","timestampMs":1710544483657,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"}
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-03-15T23:13:50.585722455Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=109.574µs
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
kafka | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | [2024-03-15T23:14:43.677+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-03-15T23:13:50.590260511Z level=info msg="Executing migration" id="create annotation table v5"
policy-db-migrator | --------------
kafka | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | [2024-03-15T23:14:43.685+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-03-15T23:13:50.59178979Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.531849ms
policy-db-migrator |
kafka | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.login.callback.handler.class = null
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a918cf66-cf68-45ea-b4be-5105781f3d6f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"dc56e0cf-4911-4e66-a485-4debe52e093d","timestampMs":1710544483663,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-03-15T23:13:50.595083906Z level=info msg="Executing migration" id="add index annotation 0 v3"
policy-db-migrator |
kafka | sasl.mechanism.controller.protocol = GSSAPI
policy-pap | sasl.login.class = null
policy-apex-pdp | [2024-03-15T23:14:43.685+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
grafana | logger=migrator t=2024-03-15T23:13:50.596296135Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.212859ms
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
policy-pap | sasl.login.connect.timeout.ms = null
policy-apex-pdp | [2024-03-15T23:14:43.722+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-03-15T23:13:50.599451117Z level=info msg="Executing migration" id="add index annotation 1 v3"
policy-db-migrator | --------------
kafka | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.login.read.timeout.ms = null
policy-apex-pdp | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5b704fa0-786f-426e-ab49-de6046b0a817","timestampMs":1710544483579,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-03-15T23:13:50.600251983Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=800.485µs
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
kafka | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | [2024-03-15T23:14:43.724+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-03-15T23:13:50.604443727Z level=info msg="Executing migration" id="add index annotation 2 v3"
policy-db-migrator | --------------
kafka | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5b704fa0-786f-426e-ab49-de6046b0a817","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"f008dd50-9471-4f36-80d6-f78aa5ec5aec","timestampMs":1710544483724,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-03-15T23:13:50.605335006Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=891.199µs
policy-db-migrator |
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | [2024-03-15T23:14:43.732+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-03-15T23:13:50.608782597Z level=info msg="Executing migration" id="add index annotation 3 v3"
policy-db-migrator |
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5b704fa0-786f-426e-ab49-de6046b0a817","responseStatus":"SUCCESS","responseMessage":"State changed to active.
No policies found."},"messageName":"PDP_STATUS","requestId":"f008dd50-9471-4f36-80d6-f78aa5ec5aec","timestampMs":1710544483724,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-03-15T23:13:50.609979756Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.196149ms policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | [2024-03-15T23:14:43.733+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS grafana | logger=migrator t=2024-03-15T23:13:50.612974592Z level=info msg="Executing migration" id="add index annotation 4 v3" policy-db-migrator | -------------- kafka | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | [2024-03-15T23:14:43.779+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-03-15T23:13:50.614076487Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.100765ms policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) kafka | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.mechanism = GSSAPI policy-apex-pdp | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d2465129-9ed1-4fca-970a-e7296db7245c","timestampMs":1710544483752,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-03-15T23:13:50.618350065Z level=info msg="Executing migration" id="Update annotation table charset" 
policy-db-migrator | -------------- kafka | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | [2024-03-15T23:14:43.781+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-03-15T23:13:50.618393256Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=44.291µs policy-db-migrator | kafka | sasl.oauthbearer.token.endpoint.url = null policy-pap | sasl.oauthbearer.expected.audience = null policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d2465129-9ed1-4fca-970a-e7296db7245c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae64ed1a-2bd0-452c-a2a3-d83350bdbf1d","timestampMs":1710544483781,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-03-15T23:13:50.622190009Z level=info msg="Executing migration" id="Add column region_id to annotation table" policy-db-migrator | kafka | sasl.server.callback.handler.class = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | [2024-03-15T23:14:43.791+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-03-15T23:13:50.626440345Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.255067ms policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql kafka | sasl.server.max.receive.size = 524288 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"d2465129-9ed1-4fca-970a-e7296db7245c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae64ed1a-2bd0-452c-a2a3-d83350bdbf1d","timestampMs":1710544483781,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-03-15T23:13:50.631428886Z level=info msg="Executing migration" id="Drop category_id index" policy-db-migrator | -------------- kafka | security.inter.broker.protocol = PLAINTEXT policy-apex-pdp | [2024-03-15T23:14:43.791+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS grafana | logger=migrator t=2024-03-15T23:13:50.632519971Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.091615ms policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | security.providers = null policy-apex-pdp | [2024-03-15T23:14:56.164+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.2 - policyadmin [15/Mar/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10648 "-" "Prometheus/2.50.1" grafana | logger=migrator t=2024-03-15T23:13:50.635816617Z level=info msg="Executing migration" id="Add column tags to annotation table" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | server.max.startup.time.ms = 9223372036854775807 policy-apex-pdp | [2024-03-15T23:15:56.083+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.2 - policyadmin [15/Mar/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.50.1" grafana | logger=migrator t=2024-03-15T23:13:50.642809362Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.992595ms policy-db-migrator | 
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null kafka | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-03-15T23:13:50.647452582Z level=info msg="Executing migration" id="Create annotation_tag table v2" policy-db-migrator | policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-03-15T23:13:50.647988369Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=535.847µs policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql policy-pap | sasl.oauthbearer.sub.claim.name = sub kafka | socket.listen.backlog.size = 50 grafana | logger=migrator t=2024-03-15T23:13:50.651017526Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | socket.receive.buffer.bytes = 102400 grafana | logger=migrator t=2024-03-15T23:13:50.651618706Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=600.88µs policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) policy-pap | security.protocol = PLAINTEXT kafka | socket.request.max.bytes = 104857600 grafana | logger=migrator t=2024-03-15T23:13:50.655625185Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" policy-db-migrator | -------------- policy-pap | security.providers = null kafka | socket.send.buffer.bytes = 102400 grafana | logger=migrator t=2024-03-15T23:13:50.65734373Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.715975ms policy-db-migrator | policy-pap | send.buffer.bytes = 131072 kafka | ssl.cipher.suites = [] grafana | logger=migrator t=2024-03-15T23:13:50.662750654Z 
level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" policy-db-migrator | policy-pap | session.timeout.ms = 45000 kafka | ssl.client.auth = none grafana | logger=migrator t=2024-03-15T23:13:50.674175782Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.426338ms policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-03-15T23:13:50.678500991Z level=info msg="Executing migration" id="Create annotation_tag table v3" policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-03-15T23:13:50.679194623Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=693.302µs policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) policy-pap | ssl.cipher.suites = null kafka | ssl.engine.factory.class = null grafana | logger=migrator t=2024-03-15T23:13:50.682735667Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" policy-db-migrator | -------------- policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | ssl.key.password = null grafana | logger=migrator t=2024-03-15T23:13:50.683880034Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.144457ms policy-db-migrator | policy-pap | ssl.endpoint.identification.algorithm = https kafka | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-03-15T23:13:50.693027329Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 
policy-db-migrator | policy-pap | ssl.engine.factory.class = null kafka | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-03-15T23:13:50.693553125Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=532.037µs policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-pap | ssl.key.password = null kafka | ssl.keystore.key = null grafana | logger=migrator t=2024-03-15T23:13:50.699086364Z level=info msg="Executing migration" id="drop table annotation_tag_v2" policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:50.699786426Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=699.853µs policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) policy-pap | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.location = null grafana | logger=migrator t=2024-03-15T23:13:50.704213139Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" policy-db-migrator | -------------- policy-pap | ssl.keystore.certificate.chain = null kafka | ssl.keystore.password = null grafana | logger=migrator t=2024-03-15T23:13:50.704624682Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=411.054µs policy-db-migrator | policy-pap | ssl.keystore.key = null kafka | ssl.keystore.type = JKS grafana | logger=migrator t=2024-03-15T23:13:50.716813864Z level=info msg="Executing migration" id="Add created time to annotation table" policy-db-migrator | policy-pap | ssl.keystore.location = null kafka | ssl.principal.mapping.rules = DEFAULT grafana | logger=migrator t=2024-03-15T23:13:50.721639269Z level=info msg="Migration successfully executed" id="Add created time to 
annotation table" duration=4.826215ms policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql kafka | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-03-15T23:13:50.725315388Z level=info msg="Executing migration" id="Add updated time to annotation table" policy-pap | ssl.keystore.password = null policy-db-migrator | -------------- kafka | ssl.provider = null grafana | logger=migrator t=2024-03-15T23:13:50.729884735Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.568847ms policy-pap | ssl.keystore.type = JKS policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-03-15T23:13:50.734346308Z level=info msg="Executing migration" id="Add index for created in annotation table" policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | -------------- kafka | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-03-15T23:13:50.735387192Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.041034ms policy-pap | ssl.provider = null policy-db-migrator | kafka | ssl.truststore.certificates = null grafana | logger=migrator t=2024-03-15T23:13:50.741676204Z level=info msg="Executing migration" id="Add index for updated in annotation table" policy-pap | ssl.secure.random.implementation = null kafka | ssl.truststore.location = null grafana | logger=migrator t=2024-03-15T23:13:50.742711478Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.035164ms policy-db-migrator | policy-pap | ssl.trustmanager.algorithm = PKIX kafka | ssl.truststore.password = null grafana | logger=migrator t=2024-03-15T23:13:50.746453458Z 
level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-03-15T23:13:50.746771308Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=317.47µs policy-db-migrator | -------------- policy-pap | ssl.truststore.location = null kafka | ssl.truststore.type = JKS grafana | logger=migrator t=2024-03-15T23:13:50.751890883Z level=info msg="Executing migration" id="Add epoch_end column" policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | ssl.truststore.password = null kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 grafana | logger=migrator t=2024-03-15T23:13:50.758467105Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.570881ms policy-db-migrator | -------------- policy-pap | ssl.truststore.type = JKS kafka | transaction.max.timeout.ms = 900000 grafana | logger=migrator t=2024-03-15T23:13:50.763042392Z level=info msg="Executing migration" id="Add index for epoch_end" policy-db-migrator | policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | transaction.partition.verification.enable = true grafana | logger=migrator t=2024-03-15T23:13:50.764093926Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.051784ms policy-db-migrator | policy-pap | kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 grafana | logger=migrator t=2024-03-15T23:13:50.767678201Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql policy-pap | [2024-03-15T23:14:21.622+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | transaction.state.log.load.buffer.size = 5242880 grafana | logger=migrator t=2024-03-15T23:13:50.76796382Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=282.549µs policy-db-migrator | -------------- policy-pap | [2024-03-15T23:14:21.622+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | transaction.state.log.min.isr = 2 grafana | logger=migrator t=2024-03-15T23:13:50.772705213Z level=info msg="Executing migration" id="Move region to single row" policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | [2024-03-15T23:14:21.622+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544461622 kafka | transaction.state.log.num.partitions = 50 grafana | logger=migrator t=2024-03-15T23:13:50.773622683Z level=info msg="Migration successfully executed" id="Move region to single row" duration=916.579µs policy-db-migrator | -------------- policy-pap | [2024-03-15T23:14:21.622+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Subscribed to topic(s): policy-pdp-pap kafka | transaction.state.log.replication.factor = 3 grafana | logger=migrator t=2024-03-15T23:13:50.778965174Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" policy-db-migrator | policy-pap | [2024-03-15T23:14:21.623+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher kafka | transaction.state.log.segment.bytes = 104857600 grafana | logger=migrator t=2024-03-15T23:13:50.780010688Z level=info msg="Migration successfully executed" 
id="Remove index org_id_epoch from annotation table" duration=1.045384ms policy-db-migrator | policy-pap | [2024-03-15T23:14:21.623+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e833a44a-4d39-4a1d-8bf3-bd02ef013e96, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@17ebbf1e kafka | transactional.id.expiration.ms = 604800000 grafana | logger=migrator t=2024-03-15T23:13:50.785275238Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql policy-pap | [2024-03-15T23:14:21.623+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e833a44a-4d39-4a1d-8bf3-bd02ef013e96, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | unclean.leader.election.enable = false grafana | logger=migrator t=2024-03-15T23:13:50.786656922Z level=info msg="Migration successfully executed" id="Remove index 
org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.377505ms policy-db-migrator | -------------- policy-pap | [2024-03-15T23:14:21.623+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | unstable.api.versions.enable = false grafana | logger=migrator t=2024-03-15T23:13:50.791892491Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | allow.auto.create.topics = true kafka | zookeeper.clientCnxnSocket = null grafana | logger=migrator t=2024-03-15T23:13:50.793394749Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.501669ms policy-db-migrator | -------------- policy-pap | auto.commit.interval.ms = 5000 kafka | zookeeper.connect = zookeeper:2181 grafana | logger=migrator t=2024-03-15T23:13:50.798375829Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" policy-db-migrator | policy-pap | auto.include.jmx.reporter = true kafka | zookeeper.connection.timeout.ms = null grafana | logger=migrator t=2024-03-15T23:13:50.799550137Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.174268ms policy-db-migrator | policy-pap | auto.offset.reset = latest kafka | zookeeper.max.in.flight.requests = 10 grafana | logger=migrator t=2024-03-15T23:13:50.803559596Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-pap | bootstrap.servers = [kafka:9092] kafka | zookeeper.metadata.migration.enable = false 
grafana | logger=migrator t=2024-03-15T23:13:50.804947911Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.388765ms policy-db-migrator | -------------- policy-pap | check.crcs = true kafka | zookeeper.session.timeout.ms = 18000 grafana | logger=migrator t=2024-03-15T23:13:50.809987803Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | client.dns.lookup = use_all_dns_ips kafka | zookeeper.set.acl = false grafana | logger=migrator t=2024-03-15T23:13:50.81143905Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.450897ms policy-db-migrator | -------------- policy-pap | client.id = consumer-policy-pap-4 kafka | zookeeper.ssl.cipher.suites = null grafana | logger=migrator t=2024-03-15T23:13:50.818522178Z level=info msg="Executing migration" id="Increase tags column to length 4096" policy-db-migrator | policy-pap | client.rack = kafka | zookeeper.ssl.client.enable = false grafana | logger=migrator t=2024-03-15T23:13:50.818679673Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=157.935µs policy-db-migrator | policy-pap | connections.max.idle.ms = 540000 kafka | zookeeper.ssl.crl.enable = false grafana | logger=migrator t=2024-03-15T23:13:50.82358133Z level=info msg="Executing migration" id="create test_data table" policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-pap | default.api.timeout.ms = 60000 kafka | zookeeper.ssl.enabled.protocols = null grafana | logger=migrator t=2024-03-15T23:13:50.825183652Z level=info msg="Migration successfully executed" id="create test_data table" 
duration=1.601422ms policy-db-migrator | -------------- policy-pap | enable.auto.commit = true kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS grafana | logger=migrator t=2024-03-15T23:13:50.830775232Z level=info msg="Executing migration" id="create dashboard_version table v1" policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | zookeeper.ssl.keystore.location = null grafana | logger=migrator t=2024-03-15T23:13:50.831694372Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=918.75µs policy-pap | exclude.internal.topics = true policy-db-migrator | -------------- kafka | zookeeper.ssl.keystore.password = null grafana | logger=migrator t=2024-03-15T23:13:50.835861496Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | kafka | zookeeper.ssl.keystore.type = null grafana | logger=migrator t=2024-03-15T23:13:50.836896649Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.031283ms policy-pap | fetch.max.wait.ms = 500 policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:50.840738443Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" policy-pap | fetch.min.bytes = 1 kafka | zookeeper.ssl.ocsp.enable = false policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql grafana | logger=migrator t=2024-03-15T23:13:50.841804087Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.066194ms policy-pap | group.id = policy-pap kafka | zookeeper.ssl.protocol = TLSv1.2 policy-db-migrator | 
--------------
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-03-15T23:13:50.846268221Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
kafka | zookeeper.ssl.truststore.location = null
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-03-15T23:13:50.84655272Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=284.489µs
kafka | zookeeper.ssl.truststore.password = null
policy-db-migrator | --------------
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-03-15T23:13:50.84997752Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
kafka | zookeeper.ssl.truststore.type = null
policy-db-migrator |
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-03-15T23:13:50.850428595Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=450.854µs
kafka | (kafka.server.KafkaConfig)
policy-db-migrator |
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-03-15T23:13:50.853896676Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
kafka | [2024-03-15 23:13:53,437] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-03-15T23:13:50.854053001Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=156.145µs
kafka | [2024-03-15 23:13:53,437] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator | --------------
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-03-15T23:13:50.85900046Z level=info msg="Executing migration" id="create team table"
kafka | [2024-03-15 23:13:53,438] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-03-15T23:13:50.860339884Z level=info msg="Migration successfully executed" id="create team table" duration=1.339313ms
kafka | [2024-03-15 23:13:53,441] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator | --------------
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-03-15T23:13:50.866605535Z level=info msg="Executing migration" id="add index team.org_id"
kafka | [2024-03-15 23:13:53,468] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
policy-db-migrator |
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-03-15T23:13:50.868247598Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.641773ms
kafka | [2024-03-15 23:13:53,472] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
policy-db-migrator |
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-03-15T23:13:50.872526676Z level=info msg="Executing migration" id="add unique index team_org_id_name"
kafka | [2024-03-15 23:13:53,480] INFO Loaded 0 logs in 12ms (kafka.log.LogManager)
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-03-15T23:13:50.874051295Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.524759ms
kafka | [2024-03-15 23:13:53,481] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
policy-db-migrator | --------------
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-03-15T23:13:50.879163109Z level=info msg="Executing migration" id="Add column uid in team"
kafka | [2024-03-15 23:13:53,482] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-03-15T23:13:50.883140887Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.982458ms
kafka | [2024-03-15 23:13:53,493] INFO Starting the log cleaner (kafka.log.LogCleaner)
policy-db-migrator | --------------
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-03-15T23:13:50.889344517Z level=info msg="Executing migration" id="Update uid column values in team"
kafka | [2024-03-15 23:13:53,536] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
policy-db-migrator |
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-03-15T23:13:50.889686908Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=342.991µs
kafka | [2024-03-15 23:13:53,567] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
policy-db-migrator |
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-03-15T23:13:50.893385947Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
kafka | [2024-03-15 23:13:53,582] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-03-15T23:13:50.895095942Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.709265ms
kafka | [2024-03-15 23:13:53,608] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
policy-db-migrator | --------------
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-03-15T23:13:50.90063327Z level=info msg="Executing migration" id="create team member table"
kafka | [2024-03-15 23:13:53,995] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-03-15T23:13:50.901979174Z level=info msg="Migration successfully executed" id="create team member table" duration=1.346034ms
kafka | [2024-03-15 23:13:54,014] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
policy-db-migrator | --------------
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-03-15T23:13:50.907996827Z level=info msg="Executing migration" id="add index team_member.org_id"
kafka | [2024-03-15 23:13:54,014] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-db-migrator |
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-03-15T23:13:50.909681962Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.682865ms
kafka | [2024-03-15 23:13:54,020] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:50.913175874Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-pap | sasl.jaas.config = null
kafka | [2024-03-15 23:13:54,024] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
grafana | logger=migrator t=2024-03-15T23:13:50.914204047Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.027273ms
policy-db-migrator | --------------
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | [2024-03-15 23:13:54,050] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-03-15T23:13:50.917852255Z level=info msg="Executing migration" id="add index team_member.team_id"
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | [2024-03-15 23:13:54,053] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-03-15T23:13:50.918921129Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.068794ms
policy-db-migrator | --------------
policy-pap | sasl.kerberos.service.name = null
kafka | [2024-03-15 23:13:54,055] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-03-15T23:13:50.92298831Z level=info msg="Executing migration" id="Add column email to team table"
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-03-15 23:13:54,058] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-03-15T23:13:50.927911028Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.922458ms
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-03-15 23:13:54,059] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-03-15T23:13:50.936281358Z level=info msg="Executing migration" id="Add column external to team_member table"
policy-db-migrator | > upgrade 0100-pdp.sql
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-03-15 23:13:54,074] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-03-15T23:13:50.942985764Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=6.705785ms
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,082] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-03-15T23:13:50.94598114Z level=info msg="Executing migration" id="Add column permission to team_member table"
policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
kafka | [2024-03-15 23:13:54,114] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-03-15T23:13:50.95094675Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.922938ms
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,140] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1710544434129,1710544434129,1,0,0,72057608227586049,258,0,27
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-03-15T23:13:50.95437437Z level=info msg="Executing migration" id="create dashboard acl table"
policy-db-migrator |
kafka | (kafka.zk.KafkaZkClient)
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-03-15T23:13:50.955436214Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.063184ms
policy-db-migrator |
kafka | [2024-03-15 23:13:54,142] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-03-15T23:13:50.961622743Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
kafka | [2024-03-15 23:13:54,197] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-03-15T23:13:50.962720809Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.097666ms
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,204] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-03-15T23:13:50.965790168Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
kafka | [2024-03-15 23:13:54,211] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-03-15T23:13:50.967679898Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.887751ms
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,213] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-03-15T23:13:50.976526073Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
policy-db-migrator |
kafka | [2024-03-15 23:13:54,223] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-03-15T23:13:50.977532215Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.005482ms
policy-db-migrator |
kafka | [2024-03-15 23:13:54,233] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-03-15T23:13:50.982290649Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
kafka | [2024-03-15 23:13:54,237] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-03-15T23:13:50.983346253Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.055574ms
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,243] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-03-15T23:13:50.987854668Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
kafka | [2024-03-15 23:13:54,244] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-03-15T23:13:50.989539312Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.687184ms
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,248] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-03-15T23:13:50.995270796Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
policy-db-migrator |
kafka | [2024-03-15 23:13:54,267] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-03-15T23:13:50.996382082Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.113026ms
policy-db-migrator |
kafka | [2024-03-15 23:13:54,272] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-03-15T23:13:50.999636957Z level=info msg="Executing migration" id="add index dashboard_permission"
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
kafka | [2024-03-15 23:13:54,273] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-03-15T23:13:51.001167326Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.528679ms
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,282] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-03-15T23:13:51.004758111Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
kafka | [2024-03-15 23:13:54,286] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-03-15T23:13:51.005574278Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=815.856µs
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,291] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
policy-pap | security.providers = null
policy-db-migrator |
kafka | [2024-03-15 23:13:54,294] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-03-15T23:13:51.010879748Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
policy-db-migrator |
kafka | [2024-03-15 23:13:54,296] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
policy-pap | session.timeout.ms = 45000
grafana | logger=migrator t=2024-03-15T23:13:51.011325102Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=444.614µs
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
kafka | [2024-03-15 23:13:54,312] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-03-15T23:13:51.015890688Z level=info msg="Executing migration" id="create tag table"
kafka | [2024-03-15 23:13:54,316] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
policy-pap | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-03-15T23:13:51.016735245Z level=info msg="Migration successfully executed" id="create tag table" duration=844.637µs
kafka | [2024-03-15 23:13:54,316] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | ssl.cipher.suites = null
grafana | logger=migrator t=2024-03-15T23:13:51.022683666Z level=info msg="Executing migration" id="add index tag.key_value"
kafka | [2024-03-15 23:13:54,324] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
policy-db-migrator |
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-03-15T23:13:51.024401471Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.717545ms
policy-db-migrator | --------------
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-03-15T23:13:51.030379743Z level=info msg="Executing migration" id="create login attempt table"
kafka | [2024-03-15 23:13:54,332] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-03-15T23:13:51.031303493Z level=info msg="Migration successfully executed" id="create login attempt table" duration=922.729µs
kafka | [2024-03-15 23:13:54,334] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
policy-db-migrator | --------------
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-03-15T23:13:51.034658Z level=info msg="Executing migration" id="add index login_attempt.username"
policy-db-migrator |
kafka | [2024-03-15 23:13:54,334] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-03-15T23:13:51.035604591Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=946.44µs
policy-db-migrator |
kafka | [2024-03-15 23:13:54,334] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-03-15T23:13:51.038911637Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
kafka | [2024-03-15 23:13:54,335] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-03-15T23:13:51.039880938Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=968.932µs
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,337] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-03-15T23:13:51.045182478Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
kafka | [2024-03-15 23:13:54,337] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-03-15T23:13:51.060452397Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.269739ms
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,338] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-03-15T23:13:51.068713672Z level=info msg="Executing migration" id="create login_attempt v2"
policy-db-migrator |
kafka | [2024-03-15 23:13:54,338] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-03-15T23:13:51.070033115Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.315572ms
policy-db-migrator |
kafka | [2024-03-15 23:13:54,339] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
policy-pap | ssl.provider = null
grafana | logger=migrator t=2024-03-15T23:13:51.073601029Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
kafka | [2024-03-15 23:13:54,341] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
policy-pap | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-03-15T23:13:51.076607735Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=3.005166ms
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,349] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-03-15T23:13:51.082888227Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
kafka | [2024-03-15 23:13:54,350] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-03-15T23:13:51.08329983Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=408.923µs
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,353] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
policy-pap | ssl.truststore.location = null
grafana | logger=migrator t=2024-03-15T23:13:51.08734111Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
policy-db-migrator |
kafka | [2024-03-15 23:13:54,354] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.truststore.password = null
grafana | logger=migrator t=2024-03-15T23:13:51.088408114Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.072465ms
policy-db-migrator |
kafka | [2024-03-15 23:13:54,354] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
policy-pap | ssl.truststore.type = JKS
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
kafka | [2024-03-15 23:13:54,354] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-03-15T23:13:51.093555249Z level=info msg="Executing migration" id="create user auth table"
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,355] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
policy-pap |
grafana | logger=migrator t=2024-03-15T23:13:51.094914972Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.359553ms
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
kafka | [2024-03-15 23:13:54,357] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
policy-pap | [2024-03-15T23:14:21.627+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
grafana | logger=migrator t=2024-03-15T23:13:51.100505782Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
policy-db-migrator | JOIN pdpstatistics b
kafka | [2024-03-15 23:13:54,358] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
policy-pap | [2024-03-15T23:14:21.627+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
grafana | logger=migrator t=2024-03-15T23:13:51.10170082Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.192358ms
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
kafka | [2024-03-15 23:13:54,364] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
policy-pap | [2024-03-15T23:14:21.627+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544461627
grafana | logger=migrator t=2024-03-15T23:13:51.106627128Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
policy-db-migrator | SET a.id = b.id
kafka | [2024-03-15 23:13:54,365] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
policy-pap | [2024-03-15T23:14:21.628+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
grafana | logger=migrator t=2024-03-15T23:13:51.106748462Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=121.404µs
policy-db-migrator | --------------
kafka | [2024-03-15 23:13:54,366] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
policy-pap | [2024-03-15T23:14:21.628+00:00|INFO|ServiceManager|main] Policy PAP starting topics
grafana | logger=migrator t=2024-03-15T23:13:51.112248268Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
policy-db-migrator |
kafka | [2024-03-15 23:13:54,366] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
grafana | logger=migrator t=2024-03-15T23:13:51.120786392Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.536534ms
policy-db-migrator |
policy-pap | [2024-03-15T23:14:21.628+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=e833a44a-4d39-4a1d-8bf3-bd02ef013e96, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-03-15 23:13:54,366] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-03-15T23:13:51.126188545Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
policy-pap | [2024-03-15T23:14:21.628+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a833d76c-6968-4ee8-9b4d-b3fefbf07611, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-03-15 23:13:54,367] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
grafana | logger=migrator t=2024-03-15T23:13:51.131855987Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.666042ms
policy-db-migrator | --------------
policy-pap | [2024-03-15T23:14:21.628+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=ad60098f-8467-4f6a-8a6c-235480b406c4, alive=false, publisher=null]]: starting
kafka | [2024-03-15 23:13:54,367] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-03-15T23:13:51.137274491Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
policy-pap | [2024-03-15T23:14:21.647+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-03-15 23:13:54,370] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
grafana | logger=migrator t=2024-03-15T23:13:51.14287755Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.602899ms
policy-db-migrator | --------------
policy-pap | acks = -1
kafka | [2024-03-15 23:13:54,371] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.7:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
grafana | logger=migrator t=2024-03-15T23:13:51.148252723Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
policy-db-migrator |
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-03-15 23:13:54,372] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
grafana | logger=migrator t=2024-03-15T23:13:51.154151072Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.901919ms
policy-db-migrator |
policy-pap | batch.size = 16384
kafka | [2024-03-15 23:13:54,379] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
grafana | logger=migrator t=2024-03-15T23:13:51.157744027Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
policy-pap | bootstrap.servers = [kafka:9092]
kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed.
grafana | logger=migrator t=2024-03-15T23:13:51.158857923Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.113516ms
policy-db-migrator | --------------
policy-pap | buffer.memory = 33554432
kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
grafana | logger=migrator t=2024-03-15T23:13:51.163586144Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298)
grafana | logger=migrator t=2024-03-15T23:13:51.172118858Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=8.527604ms
policy-db-migrator | --------------
policy-pap | client.id = producer-1
kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251)
grafana | logger=migrator t=2024-03-15T23:13:51.178363708Z level=info msg="Executing migration" id="create server_lock table"
policy-db-migrator |
policy-pap | compression.type = none
kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130)
grafana | logger=migrator t=2024-03-15T23:13:51.179146113Z level=info msg="Migration successfully executed" id="create server_lock table" duration=781.425µs
policy-db-migrator |
policy-pap | connections.max.idle.ms = 540000
kafka | [2024-03-15 23:13:54,384] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient)
grafana | logger=migrator t=2024-03-15T23:13:51.18278567Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-03-15 23:13:54,388] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
grafana | logger=migrator t=2024-03-15T23:13:51.184544956Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.759766ms
policy-db-migrator | --------------
policy-pap | enable.idempotence = true
kafka | [2024-03-15 23:13:54,388] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-03-15T23:13:51.189467694Z level=info msg="Executing migration" id="create user auth token table"
policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP)
policy-pap | interceptor.classes = []
kafka | [2024-03-15 23:13:54,388] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser)
grafana | logger=migrator t=2024-03-15T23:13:51.191091666Z level=info msg="Migration successfully executed" id="create user auth
token table" duration=1.626872ms policy-db-migrator | -------------- policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-03-15 23:13:54,388] INFO Kafka startTimeMs: 1710544434380 (org.apache.kafka.common.utils.AppInfoParser) grafana | logger=migrator t=2024-03-15T23:13:51.201027825Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" policy-db-migrator | policy-pap | linger.ms = 0 kafka | [2024-03-15 23:13:54,390] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) grafana | logger=migrator t=2024-03-15T23:13:51.202291046Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.26203ms policy-db-migrator | policy-pap | max.block.ms = 60000 kafka | [2024-03-15 23:13:54,495] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) grafana | logger=migrator t=2024-03-15T23:13:51.206527581Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" policy-db-migrator | > upgrade 0210-sequence.sql policy-pap | max.in.flight.requests.per.connection = 5 kafka | [2024-03-15 23:13:54,556] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.209493666Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=2.963565ms policy-db-migrator | -------------- policy-pap | max.request.size = 1048576 kafka | [2024-03-15 23:13:54,632] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) grafana | logger=migrator 
t=2024-03-15T23:13:51.214115805Z level=info msg="Executing migration" id="add index user_auth_token.user_id" policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-pap | metadata.max.age.ms = 300000 kafka | [2024-03-15 23:13:54,634] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) grafana | logger=migrator t=2024-03-15T23:13:51.216044957Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.935242ms policy-db-migrator | -------------- policy-pap | metadata.max.idle.ms = 300000 kafka | [2024-03-15 23:13:59,390] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) grafana | logger=migrator t=2024-03-15T23:13:51.223544007Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" policy-db-migrator | policy-pap | metric.reporters = [] kafka | [2024-03-15 23:13:59,390] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) grafana | logger=migrator t=2024-03-15T23:13:51.22957702Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.026923ms policy-db-migrator | policy-pap | metrics.num.samples = 2 kafka | [2024-03-15 23:14:22,183] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) grafana | logger=migrator t=2024-03-15T23:13:51.233094333Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" policy-db-migrator | > upgrade 0220-sequence.sql policy-pap | metrics.recording.level = INFO kafka | [2024-03-15 23:14:22,191] INFO Creating topic __consumer_offsets with configuration 
{compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:51.234263401Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.167758ms policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-03-15 23:14:22,195] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) grafana | logger=migrator t=2024-03-15T23:13:51.240326915Z level=info msg="Executing migration" id="create cache_data table" policy-pap | partitioner.adaptive.partitioning.enable = true kafka | [2024-03-15 
23:14:22,203] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:51.2414024Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.074915ms policy-pap | partitioner.availability.timeout.ms = 0 kafka | [2024-03-15 23:14:22,237] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(RYQK08lOSYaXD4Alb86gyg),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(R2o1IzsbR_ucSKqMoC8FrA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:51.248750955Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" policy-pap | partitioner.class = null kafka | [2024-03-15 23:14:22,247] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:51.250107239Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.375124ms policy-pap | partitioner.ignore.keys = false kafka | [2024-03-15 23:14:22,253] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql grafana | logger=migrator t=2024-03-15T23:13:51.2545119Z level=info msg="Executing migration" id="create short_url table v1" policy-pap | receive.buffer.bytes = 32768 kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:51.255506202Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=994.562µs policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) grafana | logger=migrator t=2024-03-15T23:13:51.263427666Z level=info msg="Executing migration" id="add index short_url.org_id-uid" policy-pap | reconnect.backoff.ms = 50 kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.264697997Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.278041ms policy-pap | request.timeout.ms = 30000 policy-db-migrator | -------------- kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.268685715Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" policy-pap | retries = 2147483647 policy-db-migrator | kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.268781478Z 
level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=96.033µs policy-pap | retry.backoff.ms = 100 policy-db-migrator | kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.271939239Z level=info msg="Executing migration" id="delete alert_definition table" policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql kafka | [2024-03-15 23:14:22,254] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.272116275Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=177.275µs policy-pap | sasl.jaas.config = null policy-db-migrator | -------------- kafka | [2024-03-15 23:14:22,255] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.284581374Z level=info msg="Executing migration" id="recreate alert_definition table" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) kafka | [2024-03-15 23:14:22,255] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.286441564Z 
level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.86545ms policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | -------------- kafka | [2024-03-15 23:14:22,255] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.293929764Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" policy-pap | sasl.kerberos.service.name = null policy-db-migrator | kafka | [2024-03-15 23:14:22,255] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.294970647Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.041113ms policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | kafka | [2024-03-15 23:14:22,255] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.298253533Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | > upgrade 0120-toscatrigger.sql kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.299599086Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.349353ms policy-pap | sasl.login.callback.handler.class = 
null policy-db-migrator | -------------- kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.302822189Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" policy-pap | sasl.login.class = null policy-db-migrator | DROP TABLE IF EXISTS toscatrigger kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.302976494Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=155.325µs policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | -------------- kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.309872315Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.311846179Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.969983ms policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.317532931Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql kafka | [2024-03-15 23:14:22,257] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.31905745Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.524739ms policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | -------------- kafka | [2024-03-15 23:14:22,258] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.324583107Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB kafka | [2024-03-15 23:14:22,258] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.326287742Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.701644ms policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- kafka | [2024-03-15 23:14:22,258] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with 
assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.33339436Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | kafka | [2024-03-15 23:14:22,258] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.334949339Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.55553ms policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | kafka | [2024-03-15 23:14:22,258] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.340452346Z level=info msg="Executing migration" id="Add column paused in alert_definition" policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | > upgrade 0140-toscaparameter.sql kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.353830405Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=13.378469ms policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | -------------- kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.360147487Z level=info msg="Executing migration" id="drop alert_definition table" policy-pap | 
sasl.oauthbearer.expected.issuer = null
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.361031646Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=884.239µs
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.370141998Z level=info msg="Executing migration" id="delete alert_definition_version table"
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator |
kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.370272402Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=131.404µs
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator |
kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.374822898Z level=info msg="Executing migration" id="recreate alert_definition_version table"
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | > upgrade 0150-toscaproperty.sql
kafka | [2024-03-15 23:14:22,259] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.376383698Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.56164ms
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.380044225Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.381171582Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.126936ms
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.389923432Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator |
kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.390907834Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=983.892µs
policy-pap | security.providers = null
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.397076772Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
policy-pap | send.buffer.bytes = 131072
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.397181725Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=103.254µs
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,260] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.402132314Z level=info msg="Executing migration" id="drop alert_definition_version table"
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-db-migrator |
kafka | [2024-03-15 23:14:22,261] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.403501168Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.368943ms
policy-pap | ssl.cipher.suites = null
kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.416596268Z level=info msg="Executing migration" id="create alert_instance table"
policy-db-migrator | --------------
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.417726174Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.129517ms
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.424918414Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
policy-db-migrator | --------------
policy-pap | ssl.engine.factory.class = null
kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.425965908Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.047034ms
policy-pap | ssl.key.password = null
kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.435598117Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
policy-pap | ssl.keymanager.algorithm = SunX509
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.436674021Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.077434ms
policy-pap | ssl.keystore.certificate.chain = null
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.4447261Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
policy-pap | ssl.keystore.key = null
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.452440617Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=7.743779ms
policy-pap | ssl.keystore.location = null
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.457356995Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
policy-pap | ssl.keystore.password = null
policy-db-migrator |
kafka | [2024-03-15 23:14:22,263] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.458636546Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.282212ms
policy-pap | ssl.keystore.type = JKS
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,264] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.463202642Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
policy-pap | ssl.protocol = TLSv1.3
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
kafka | [2024-03-15 23:14:22,264] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.464332608Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.130246ms
policy-pap | ssl.provider = null
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.469004888Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
policy-pap | ssl.secure.random.implementation = null
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.498525385Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=29.512656ms
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator |
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.506973526Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
policy-pap | ssl.truststore.certificates = null
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.528666521Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=21.690846ms
policy-pap | ssl.truststore.location = null
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.534506068Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
policy-pap | ssl.truststore.password = null
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.535338745Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=832.767µs
policy-pap | ssl.truststore.type = JKS
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.540912814Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
policy-pap | transaction.timeout.ms = 60000
policy-db-migrator |
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.541876065Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=963.081µs
policy-pap | transactional.id = null
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.547230016Z level=info msg="Executing migration" id="add current_reason column related to current_state"
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.552983361Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.749965ms
policy-pap |
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.561792103Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
policy-pap | [2024-03-15T23:14:21.661+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-db-migrator |
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.565816402Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.025559ms
policy-pap | [2024-03-15T23:14:21.680+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
policy-db-migrator |
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.571089922Z level=info msg="Executing migration" id="create alert_rule table"
policy-pap | [2024-03-15T23:14:21.680+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.572154966Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.064365ms
policy-pap | [2024-03-15T23:14:21.680+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544461680
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.577324061Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
policy-pap | [2024-03-15T23:14:21.681+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=ad60098f-8467-4f6a-8a6c-235480b406c4, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.578657794Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.338233ms
policy-pap | [2024-03-15T23:14:21.681+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=6be4621a-d017-49e7-bcd8-e5e0cbe56c95, alive=false, publisher=null]]: starting
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.582240919Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
policy-pap | [2024-03-15T23:14:21.681+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.583313674Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.072674ms
policy-pap | acks = -1
policy-db-migrator |
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.589190902Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator | > upgrade 0100-upgrade.sql
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.590354549Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.163247ms
policy-pap | batch.size = 16384
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.596717323Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
policy-pap | bootstrap.servers = [kafka:9092]
policy-db-migrator | select 'upgrade to 1100 completed' as msg
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.596786576Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=70.452µs
policy-pap | buffer.memory = 33554432
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.600025619Z level=info msg="Executing migration" id="add column for to alert_rule"
policy-pap | client.dns.lookup = use_all_dns_ips
policy-db-migrator |
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.606417984Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.394885ms
policy-pap | client.id = producer-2
policy-db-migrator | msg
kafka | [2024-03-15 23:14:22,278] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.612065335Z level=info msg="Executing migration" id="add column annotations to alert_rule"
policy-pap | compression.type = none
policy-db-migrator | upgrade to 1100 completed
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.619487563Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=7.424438ms
policy-pap | connections.max.idle.ms = 540000
policy-db-migrator |
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.627981336Z level=info msg="Executing migration" id="add column labels to alert_rule"
policy-pap | delivery.timeout.ms = 120000
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.634823975Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.841979ms
policy-pap | enable.idempotence = true
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.639176715Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
policy-pap | interceptor.classes = []
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.639911238Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=734.633µs
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.644818696Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
policy-pap | linger.ms = 0
policy-db-migrator |
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.646683436Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.861479ms
policy-pap | max.block.ms = 60000
policy-db-migrator |
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.650327332Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
policy-pap | max.in.flight.requests.per.connection = 5
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.658870196Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=8.543424ms
policy-pap | max.request.size = 1048576
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.664757905Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.66896167Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.203185ms
policy-pap | metadata.max.idle.ms = 300000
policy-db-migrator | --------------
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:51.672491423Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
policy-pap | metric.reporters = []
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.673458814Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=966.961µs
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | metrics.num.samples = 2
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:51.677936298Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | metrics.recording.level = INFO
policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
grafana | logger=migrator t=2024-03-15T23:13:51.687383151Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.450654ms
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:51.689802728Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | partitioner.adaptive.partitioning.enable = true
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.69421659Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.413032ms
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | partitioner.availability.timeout.ms = 0
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.699431207Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | partitioner.class = null
policy-db-migrator | > upgrade 0120-audit_sequence.sql
grafana | logger=migrator t=2024-03-15T23:13:51.69951643Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=86.263µs
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | partitioner.ignore.keys = false
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:51.707276749Z level=info msg="Executing migration" id="create alert_rule_version table"
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
grafana | logger=migrator t=2024-03-15T23:13:51.709111407Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.834499ms
policy-pap | receive.buffer.bytes = 32768
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:51.712777135Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
policy-pap | reconnect.backoff.max.ms = 1000
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.714687776Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.909921ms
policy-pap | reconnect.backoff.ms = 50
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:51.720053918Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
policy-pap | request.timeout.ms = 30000
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
grafana | logger=migrator t=2024-03-15T23:13:51.721127703Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.073435ms
policy-pap | retries = 2147483647
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:51.725985888Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
policy-pap | retry.backoff.ms = 100
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.726085332Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=100.684µs
policy-pap | sasl.client.callback.handler.class = null
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.729599534Z level=info msg="Executing migration" id="add column for to alert_rule_version"
policy-pap | sasl.jaas.config = null
kafka | [2024-03-15 23:14:22,279] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
grafana | logger=migrator t=2024-03-15T23:13:51.738835721Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=9.237386ms
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | [2024-03-15 23:14:22,279] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:51.743014235Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
kafka | [2024-03-15 23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
grafana | logger=migrator t=2024-03-15T23:13:51.750335499Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.318685ms
policy-pap | sasl.kerberos.service.name = null
kafka | [2024-03-15 23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:51.754636637Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | [2024-03-15 23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.759274236Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.637109ms
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-03-15 23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:51.770850857Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
policy-pap | sasl.login.callback.handler.class = null
kafka | [2024-03-15 23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
grafana | logger=migrator t=2024-03-15T23:13:51.780196197Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=9.34581ms
policy-pap | sasl.login.class = null
kafka | [2024-03-15 23:14:22,441] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:51.784710062Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
policy-pap | sasl.login.connect.timeout.ms = null
kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-03-15T23:13:51.791316343Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.605582ms
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-03-15T23:13:51.794517946Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
policy-pap | sasl.login.refresh.buffer.seconds = 300
kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-db-migrator | TRUNCATE TABLE sequence
grafana | logger=migrator t=2024-03-15T23:13:51.794566638Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=48.992µs
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1,
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:51.797482301Z level=info msg="Executing migration" id=create_alert_configuration_table policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:51.798046099Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=563.508µs policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-03-15T23:13:51.802465861Z level=info msg="Executing migration" id="Add column default in alert_configuration" policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0100-pdpstatistics.sql grafana | logger=migrator t=2024-03-15T23:13:51.813813675Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=11.349224ms policy-pap | 
sasl.login.retry.backoff.ms = 100 kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-03-15T23:13:51.818786064Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" policy-pap | sasl.mechanism = GSSAPI kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-03-15T23:13:51.818833856Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=47.922µs kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.expected.audience = null grafana | logger=migrator t=2024-03-15T23:13:51.821590484Z level=info msg="Executing migration" id="add column org_id in alert_configuration" kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition 
with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.expected.issuer = null grafana | logger=migrator t=2024-03-15T23:13:51.828096763Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.505689ms kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 grafana | logger=migrator t=2024-03-15T23:13:51.833550618Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | DROP TABLE pdpstatistics policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-03-15T23:13:51.83455352Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.002962ms kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 grafana | logger=migrator t=2024-03-15T23:13:51.838149035Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.url = null grafana | logger=migrator t=2024-03-15T23:13:51.849789198Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=11.641103ms kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.oauthbearer.scope.claim.name = scope grafana | logger=migrator t=2024-03-15T23:13:51.858733705Z level=info msg="Executing migration" id=create_ngalert_configuration_table kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-03-15T23:13:51.859995836Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.261401ms 
kafka | [2024-03-15 23:14:22,442] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-03-15T23:13:51.865233144Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" kafka | [2024-03-15 23:14:22,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-03-15T23:13:51.866489074Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.257701ms kafka | [2024-03-15 23:14:22,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | security.providers = null grafana | logger=migrator t=2024-03-15T23:13:51.870535094Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" kafka | [2024-03-15 23:14:22,443] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-03-15T23:13:51.877248619Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.712995ms kafka | [2024-03-15 23:14:22,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-03-15T23:13:51.880456272Z level=info msg="Executing migration" id="create provenance_type table" kafka | [2024-03-15 23:14:22,443] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-03-15T23:13:51.881232707Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=775.805µs kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-03-15T23:13:51.88727191Z level=info msg="Executing migration" id="add index to uniquify (record_key, 
record_type, org_id) columns" kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | DROP TABLE statistics_sequence policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-03-15T23:13:51.88852402Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.25197ms kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.endpoint.identification.algorithm = https kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.895042439Z level=info msg="Executing migration" id="create alert_image table" policy-db-migrator | policy-pap | ssl.engine.factory.class = null kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.896265549Z 
level=info msg="Migration successfully executed" id="create alert_image table" duration=1.22876ms policy-db-migrator | policyadmin: OK: upgrade (1300) policy-pap | ssl.key.password = null kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.901192647Z level=info msg="Executing migration" id="add unique index on token to alert_image table" policy-db-migrator | name version policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.901966652Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=774.144µs policy-db-migrator | policyadmin 1300 policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-03-15 23:14:22,444] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.907385855Z level=info msg="Executing migration" id="support longer URLs in alert_image table" policy-db-migrator | ID script operation from_version to_version tag success atTime policy-pap | ssl.keystore.key = null kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.90752469Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=138.475µs policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50 policy-pap | ssl.keystore.location = null kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.912538051Z level=info msg="Executing migration" id=create_alert_configuration_history_table policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50 policy-pap | ssl.keystore.password = null kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.914287927Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.749586ms policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50 policy-pap | ssl.keystore.type = JKS kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.918366027Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50 policy-pap | ssl.protocol = TLSv1.3 kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.919363619Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=997.542µs policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50 policy-pap | ssl.provider = null kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.92466996Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50 policy-pap | ssl.secure.random.implementation = null kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.925343871Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50 policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.928858414Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50 policy-pap | ssl.truststore.certificates = null kafka | [2024-03-15 23:14:22,445] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.929544116Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=685.322µs policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:50 policy-pap | ssl.truststore.location = null kafka | [2024-03-15 23:14:22,446] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.934793164Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | ssl.truststore.password = null kafka | [2024-03-15 23:14:22,446] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.936074535Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.281191ms policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | ssl.truststore.type = JKS kafka | [2024-03-15 23:14:22,446] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.941805009Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | transaction.timeout.ms = 60000 kafka | [2024-03-15 23:14:22,446] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition 
to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.951676996Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=9.872236ms policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | transactional.id = null kafka | [2024-03-15 23:14:22,446] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.957528863Z level=info msg="Executing migration" id="create library_element table v1" policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-03-15 23:14:22,449] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.959456145Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.928112ms policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | kafka | [2024-03-15 23:14:22,449] TRACE [Controller id=1 epoch=1] 
Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.967179563Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.682+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. kafka | [2024-03-15 23:14:22,449] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.968392692Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.212468ms policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 kafka | [2024-03-15 23:14:22,449] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) grafana | 
logger=migrator t=2024-03-15T23:13:51.971863003Z level=info msg="Executing migration" id="create library_element_connection table v1" policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 kafka | [2024-03-15 23:14:22,449] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.972782202Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=919.489µs policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710544461685 kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.97925461Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, 
toString()=InlineBusTopicSink [partitionId=6be4621a-d017-49e7-bcd8-e5e0cbe56c95, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.980944254Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.689484ms policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.989638803Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.685+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.991231294Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.592641ms policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.687+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:51.994929262Z level=info msg="Executing migration" id="increase max description length to 2048" policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.690+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers grafana | logger=migrator t=2024-03-15T23:13:51.994967914Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=40.172µs kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) policy-db-migrator | 25 
0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.693+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers grafana | logger=migrator t=2024-03-15T23:13:51.998695843Z level=info msg="Executing migration" id="alter library_element model to mediumtext" kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.693+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock grafana | logger=migrator t=2024-03-15T23:13:51.998770536Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=78.773µs kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.693+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests grafana | logger=migrator t=2024-03-15T23:13:52.004063178Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" kafka | [2024-03-15 23:14:22,451] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.693+00:00|INFO|TimerManager|Thread-10] timer manager state-change started grafana | logger=migrator t=2024-03-15T23:13:52.004515859Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=452.831µs kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.693+00:00|INFO|TimerManager|Thread-9] timer manager update started grafana | logger=migrator t=2024-03-15T23:13:52.009910246Z level=info msg="Executing migration" id="create data_keys table" kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-db-migrator | 30 
0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.694+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer grafana | logger=migrator t=2024-03-15T23:13:52.011172652Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.261766ms kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.694+00:00|INFO|ServiceManager|main] Policy PAP started grafana | logger=migrator t=2024-03-15T23:13:52.014894537Z level=info msg="Executing migration" id="create secrets table" kafka | [2024-03-15 23:14:22,451] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:51 policy-pap | [2024-03-15T23:14:21.696+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.587 seconds (process running for 11.243) grafana | logger=migrator t=2024-03-15T23:13:52.015880964Z level=info msg="Migration successfully executed" id="create secrets table" duration=986.117µs kafka | [2024-03-15 23:14:22,451] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 policy-pap | [2024-03-15T23:14:22.156+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: LbZnmjPNTK-gKtiXPvevcA grafana | logger=migrator t=2024-03-15T23:13:52.020373561Z level=info msg="Executing migration" id="rename data_keys name column to id" kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 policy-pap | [2024-03-15T23:14:22.156+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-03-15T23:13:52.056326712Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=35.944731ms kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 policy-pap | [2024-03-15T23:14:22.156+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Cluster ID: LbZnmjPNTK-gKtiXPvevcA grafana | logger=migrator t=2024-03-15T23:13:52.065064948Z level=info msg="Executing migration" id="add name column into data_keys" kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 policy-pap | [2024-03-15T23:14:22.157+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: LbZnmjPNTK-gKtiXPvevcA grafana | logger=migrator t=2024-03-15T23:13:52.077028685Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=11.968707ms kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) policy-db-migrator | 37 
0460-pdppolicystatus.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 policy-pap | [2024-03-15T23:14:22.218+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-03-15T23:13:52.080604575Z level=info msg="Executing migration" id="copy data_keys id column values into name" kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 policy-pap | [2024-03-15T23:14:22.218+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: LbZnmjPNTK-gKtiXPvevcA grafana | logger=migrator t=2024-03-15T23:13:52.080791431Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=186.255µs kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 policy-pap | [2024-03-15T23:14:22.265+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-03-15T23:13:52.084591107Z level=info msg="Executing migration" id="rename data_keys name column to label" kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 policy-pap | [2024-03-15T23:14:22.291+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 grafana | logger=migrator t=2024-03-15T23:13:52.118355977Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=33.75713ms kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) policy-pap | [2024-03-15T23:14:22.304+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 grafana | logger=migrator t=2024-03-15T23:13:52.124812429Z level=info msg="Executing migration" id="rename data_keys id column back to name" policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 
1503242313500800u 1 2024-03-15 23:13:52 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) policy-pap | [2024-03-15T23:14:22.343+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-03-15T23:13:52.1540007Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.193461ms policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) policy-pap | [2024-03-15T23:14:22.392+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-03-15T23:13:52.164387752Z level=info msg="Executing migration" id="create kv_store table v1" policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 policy-pap | 
[2024-03-15T23:14:22.452+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.166297896Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.910344ms policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) policy-pap | [2024-03-15T23:14:22.501+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.172205492Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) policy-pap | [2024-03-15T23:14:22.559+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.173362655Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.156373ms kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) policy-pap | [2024-03-15T23:14:22.606+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.179404315Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) policy-pap | [2024-03-15T23:14:22.666+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.179796466Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=391.541µs kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) policy-pap | [2024-03-15T23:14:22.712+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.184792486Z level=info msg="Executing migration" id="create permission table" kafka | [2024-03-15 23:14:22,452] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) policy-pap | [2024-03-15T23:14:22.771+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.186239847Z level=info msg="Migration successfully executed" id="create permission table" duration=1.447231ms kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-pap | [2024-03-15T23:14:22.820+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.194909141Z level=info msg="Executing migration" id="add unique index permission.role_id" kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-pap | [2024-03-15T23:14:22.878+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.196732572Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.829601ms kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-pap | [2024-03-15T23:14:22.926+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.201035073Z level=info msg="Executing migration" id="add unique index role_id_action_scope" kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for 
partition __consumer_offsets-14 (state.change.logger) policy-pap | [2024-03-15T23:14:22.987+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.202115704Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.080351ms kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) policy-pap | [2024-03-15T23:14:23.035+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.213144904Z level=info msg="Executing migration" id="create role table" kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) policy-pap | 
[2024-03-15T23:14:23.098+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.21406886Z level=info msg="Migration successfully executed" id="create role table" duration=924.036µs kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) policy-pap | [2024-03-15T23:14:23.145+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:52 grafana | logger=migrator t=2024-03-15T23:13:52.222624811Z level=info msg="Executing migration" id="add column display_name" kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) policy-pap | [2024-03-15T23:14:23.155+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] (Re-)joining group policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.230446451Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.8188ms kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) policy-pap | [2024-03-15T23:14:23.202+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.233858687Z level=info msg="Executing migration" id="add column group_name" kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) policy-pap | [2024-03-15T23:14:23.204+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator 
t=2024-03-15T23:13:52.241008008Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.146971ms kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) policy-pap | [2024-03-15T23:14:23.222+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Request joining group due to: need to re-join with the given member-id: consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.25104458Z level=info msg="Executing migration" id="add index role.org_id" kafka | [2024-03-15 23:14:22,453] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) policy-pap | [2024-03-15T23:14:23.222+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 policy-pap | [2024-03-15T23:14:23.222+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.25208563Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.037109ms kafka | [2024-03-15 23:14:22,454] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) policy-pap | [2024-03-15T23:14:23.222+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.259169059Z level=info msg="Executing migration" id="add unique index role_org_id_name" kafka | [2024-03-15 23:14:22,454] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) policy-pap | [2024-03-15T23:14:23.222+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.260183377Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.014008ms kafka | [2024-03-15 23:14:22,454] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) policy-pap | [2024-03-15T23:14:23.223+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] (Re-)joining group policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.264460788Z level=info msg="Executing migration" id="add index 
role_org_id_uid" kafka | [2024-03-15 23:14:22,461] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) policy-pap | [2024-03-15T23:14:26.254+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Successfully joined group with generation Generation{generationId=1, memberId='consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e', protocol='range'} policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.265482746Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.021598ms kafka | [2024-03-15 23:14:22,467] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) policy-pap | [2024-03-15T23:14:26.260+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442', protocol='range'} policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.271128305Z level=info msg="Executing migration" id="create team role table" kafka | [2024-03-15 23:14:22,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-03-15T23:14:26.271+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Finished 
assignment for group at generation 1: {consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e=Assignment(partitions=[policy-pdp-pap-0])} policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.27199628Z level=info msg="Migration successfully executed" id="create team role table" duration=865.875µs kafka | [2024-03-15 23:14:22,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-03-15T23:14:26.271+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442=Assignment(partitions=[policy-pdp-pap-0])} policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.278005289Z level=info msg="Executing migration" id="add index team_role.org_id" kafka | [2024-03-15 23:14:22,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-03-15T23:14:26.301+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Successfully synced group in generation Generation{generationId=1, memberId='consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e', protocol='range'} policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.279092549Z level=info msg="Migration successfully executed" id="add index team_role.org_id" 
duration=1.08697ms kafka | [2024-03-15 23:14:22,469] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-03-15T23:14:26.301+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442', protocol='range'} policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 grafana | logger=migrator t=2024-03-15T23:13:52.289806361Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 policy-pap | [2024-03-15T23:14:26.302+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) grafana | logger=migrator t=2024-03-15T23:13:52.290597093Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=790.372µs kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 policy-pap | [2024-03-15T23:14:26.303+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, 
groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) grafana | logger=migrator t=2024-03-15T23:13:52.301275003Z level=info msg="Executing migration" id="add index team_role.team_id" kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 policy-pap | [2024-03-15T23:14:26.310+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 grafana | logger=migrator t=2024-03-15T23:13:52.302310782Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.035939ms kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 policy-pap | [2024-03-15T23:14:26.322+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Adding newly assigned partitions: policy-pdp-pap-0 grafana | logger=migrator t=2024-03-15T23:13:52.310513703Z level=info msg="Executing migration" id="create user role table" kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 policy-pap | 
[2024-03-15T23:14:26.331+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 grafana | logger=migrator t=2024-03-15T23:13:52.311654255Z level=info msg="Migration successfully executed" id="create user role table" duration=1.142242ms kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:53 policy-pap | [2024-03-15T23:14:26.333+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Found no committed offset for partition policy-pdp-pap-0 grafana | logger=migrator t=2024-03-15T23:13:52.317706326Z level=info msg="Executing migration" id="add index user_role.org_id" kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:26.358+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
grafana | logger=migrator t=2024-03-15T23:13:52.318885319Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.179203ms kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:26.360+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3, groupId=a833d76c-6968-4ee8-9b4d-b3fefbf07611] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. grafana | logger=migrator t=2024-03-15T23:13:52.322734977Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:28.653+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet' grafana | logger=migrator t=2024-03-15T23:13:52.323761956Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.027129ms kafka | [2024-03-15 23:14:22,470] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 
policy-pap | [2024-03-15T23:14:28.653+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet' grafana | logger=migrator t=2024-03-15T23:13:52.329329983Z level=info msg="Executing migration" id="add index user_role.user_id" kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:28.655+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 2 ms grafana | logger=migrator t=2024-03-15T23:13:52.330353331Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.023238ms kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:43.488+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: grafana | logger=migrator t=2024-03-15T23:13:52.334381575Z level=info msg="Executing migration" id="create builtin role table" kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [] policy-pap | [2024-03-15T23:14:43.489+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from 
NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ba446a9c-6622-41fc-a636-ab4cca84c30b","timestampMs":1710544483450,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:43.489+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-03-15T23:13:52.335184847Z level=info msg="Migration successfully executed" id="create builtin role table" duration=805.122µs kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ba446a9c-6622-41fc-a636-ab4cca84c30b","timestampMs":1710544483450,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-03-15T23:13:52.341906556Z level=info msg="Executing migration" id="add index builtin_role.role_id" kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 89 
0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:43.497+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus grafana | logger=migrator t=2024-03-15T23:13:52.342921395Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.015159ms kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:43.598+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting grafana | logger=migrator t=2024-03-15T23:13:52.352080303Z level=info msg="Executing migration" id="add index builtin_role.name" kafka | [2024-03-15 23:14:22,471] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:43.598+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting listener grafana | logger=migrator t=2024-03-15T23:13:52.352824814Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=742.78µs kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | 
[2024-03-15T23:14:43.598+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting timer grafana | logger=migrator t=2024-03-15T23:13:52.35981209Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:43.599+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=a918cf66-cf68-45ea-b4be-5105781f3d6f, expireMs=1710544513599] grafana | logger=migrator t=2024-03-15T23:13:52.368555796Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.748096ms kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:43.601+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting enqueue grafana | logger=migrator t=2024-03-15T23:13:52.373522616Z level=info msg="Executing migration" id="add index builtin_role.org_id" kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:43.601+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer 
[name=a918cf66-cf68-45ea-b4be-5105781f3d6f, expireMs=1710544513599] grafana | logger=migrator t=2024-03-15T23:13:52.374768931Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.246775ms kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1503242313500800u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:43.601+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate started grafana | logger=migrator t=2024-03-15T23:13:52.38113429Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:54 policy-pap | [2024-03-15T23:14:43.603+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-03-15T23:13:52.382527399Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.392089ms kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 policy-pap | 
{"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a918cf66-cf68-45ea-b4be-5105781f3d6f","timestampMs":1710544483578,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-03-15T23:13:52.394784704Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.640+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-03-15T23:13:52.395925056Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.138842ms kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a918cf66-cf68-45ea-b4be-5105781f3d6f","timestampMs":1710544483578,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-03-15T23:13:52.401856293Z level=info msg="Executing migration" id="add unique index role.uid" kafka | [2024-03-15 23:14:22,472] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 101 
0140-pk_pdpstatistics.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.640+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-03-15T23:13:52.402986285Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.129562ms kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.642+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.407303766Z level=info msg="Executing migration" id="create seed assignment table" policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"a918cf66-cf68-45ea-b4be-5105781f3d6f","timestampMs":1710544483578,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.408253273Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=948.287µs policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1503242313500900u 1 
2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.642+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-03-15T23:13:52.412103131Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.672+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-03-15T23:13:52.41382474Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.721309ms kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c8f034e9-82b5-4f8d-b347-826ceabb026b","timestampMs":1710544483657,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-03-15T23:13:52.420682043Z level=info msg="Executing migration" id="add column hidden to role table" kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.673+00:00|INFO|network|KAFKA-source-policy-pdp-pap] 
[IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-03-15T23:13:52.428965496Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.287744ms kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c8f034e9-82b5-4f8d-b347-826ceabb026b","timestampMs":1710544483657,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-03-15T23:13:52.433979637Z level=info msg="Executing migration" id="permission kind migration" kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1503242313500900u 1 2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.678+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus grafana | logger=migrator t=2024-03-15T23:13:52.443591907Z level=info msg="Migration successfully executed" id="permission kind migration" duration=9.610081ms kafka | [2024-03-15 23:14:22,473] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.682+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-03-15T23:13:52.44901514Z level=info msg="Executing migration" 
id="permission attribute migration" kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a918cf66-cf68-45ea-b4be-5105781f3d6f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"dc56e0cf-4911-4e66-a485-4debe52e093d","timestampMs":1710544483663,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-03-15T23:13:52.457378835Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.365886ms kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.701+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping grafana | logger=migrator t=2024-03-15T23:13:52.460945035Z level=info msg="Executing migration" id="permission identifier migration" kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.702+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 
PdpUpdate stopping enqueue grafana | logger=migrator t=2024-03-15T23:13:52.466539393Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.633459ms kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.702+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping timer grafana | logger=migrator t=2024-03-15T23:13:52.47889192Z level=info msg="Executing migration" id="add permission identifier index" kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 policy-pap | [2024-03-15T23:14:43.702+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=a918cf66-cf68-45ea-b4be-5105781f3d6f, expireMs=1710544513599] grafana | logger=migrator t=2024-03-15T23:13:52.479894488Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.005268ms kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-03-15T23:14:43.702+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping listener policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 grafana | logger=migrator t=2024-03-15T23:13:52.484114077Z level=info msg="Executing migration" id="add permission action 
scope role_id index" kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-03-15T23:14:43.702+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopped policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:55 grafana | logger=migrator t=2024-03-15T23:13:52.485155136Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.041099ms kafka | [2024-03-15 23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-03-15T23:14:43.709+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1503242313501000u 1 2024-03-15 23:13:56 grafana | logger=migrator t=2024-03-15T23:13:52.490587379Z level=info msg="Executing migration" id="remove permission role_id action scope index" policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"a918cf66-cf68-45ea-b4be-5105781f3d6f","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"dc56e0cf-4911-4e66-a485-4debe52e093d","timestampMs":1710544483663,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1503242313501100u 1 2024-03-15 23:13:56 grafana | logger=migrator t=2024-03-15T23:13:52.491374851Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=784.892µs kafka | [2024-03-15 
23:14:22,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-03-15T23:14:43.709+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate successful policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1503242313501200u 1 2024-03-15 23:13:56 grafana | logger=migrator t=2024-03-15T23:13:52.494207151Z level=info msg="Executing migration" id="create query_history table v1" kafka | [2024-03-15 23:14:22,475] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | [2024-03-15T23:14:43.709+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 start publishing next request policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1503242313501200u 1 2024-03-15 23:13:56 grafana | logger=migrator t=2024-03-15T23:13:52.49488984Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=683.179µs kafka | [2024-03-15 23:14:22,477] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) policy-pap | [2024-03-15T23:14:43.709+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange starting policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1503242313501200u 1 2024-03-15 23:13:56 grafana | logger=migrator t=2024-03-15T23:13:52.50553148Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.709+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange starting listener policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1503242313501200u 1 2024-03-15 23:13:56 grafana | logger=migrator t=2024-03-15T23:13:52.506386064Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=854.464µs kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange starting timer policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1503242313501300u 1 2024-03-15 23:13:56 grafana | logger=migrator t=2024-03-15T23:13:52.509910203Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=5b704fa0-786f-426e-ab49-de6046b0a817, expireMs=1710544513710] 
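The policy-pap TimerManager entries above show the per-request bookkeeping PAP does when it enqueues a request to a PDP: a timer is registered under the request id with an expireMs deadline ("state-change timer registered Timer [name=..., expireMs=1710544513710]") and cancelled once the matching response arrives. A minimal sketch of that pattern, with hypothetical function names (this is not ONAP's actual TimerManager code), using the 30000 ms state-change timeout the log itself reports:

```python
# Illustrative sketch only -- mirrors the timer bookkeeping visible in the
# policy-pap log entries; names are hypothetical, not ONAP's real API.

STATE_CHANGE_TIMEOUT_MS = 30_000  # "state-change timer waiting 30000ms" in the log

def register_timer(timers: dict, request_id: str, now_ms: int,
                   timeout_ms: int = STATE_CHANGE_TIMEOUT_MS) -> int:
    """Record when a pending request expires; returns the expireMs value."""
    expire_ms = now_ms + timeout_ms
    timers[request_id] = expire_ms
    return expire_ms

def cancel_timer(timers: dict, request_id: str) -> bool:
    """Drop the timer once the matching PDP_STATUS response is dispatched."""
    return timers.pop(request_id, None) is not None

# Reproducing the logged values: registering at 1710544483710 ms yields
# expireMs=1710544513710, matching Timer [name=5b704fa0-..., expireMs=1710544513710].
timers = {}
expire = register_timer(timers, "5b704fa0-786f-426e-ab49-de6046b0a817", 1710544483710)
print(expire)  # 1710544513710
```

If no response arrives before expireMs, a real implementation would fire a timeout handler for the request; here the cancel path corresponds to the "state-change timer cancelled" entries that follow a successful PDP_STATUS response.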
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1503242313501300u 1 2024-03-15 23:13:56 grafana | logger=migrator t=2024-03-15T23:13:52.509958594Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=48.821µs kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a918cf66-cf68-45ea-b4be-5105781f3d6f policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1503242313501300u 1 2024-03-15 23:13:56 grafana | logger=migrator t=2024-03-15T23:13:52.512740062Z level=info msg="Executing migration" id="rbac disabled migrator" kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange starting enqueue policy-db-migrator | policyadmin: OK @ 1300 grafana | logger=migrator t=2024-03-15T23:13:52.512778303Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=39.021µs policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 
PdpStateChange started grafana | logger=migrator t=2024-03-15T23:13:52.515814259Z level=info msg="Executing migration" id="teams permissions migration" kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.516288212Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=474.633µs policy-pap | [2024-03-15T23:14:43.710+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=5b704fa0-786f-426e-ab49-de6046b0a817, expireMs=1710544513710] kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.711+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-03-15T23:13:52.52153017Z level=info msg="Executing migration" id="dashboard permissions" kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | 
{"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5b704fa0-786f-426e-ab49-de6046b0a817","timestampMs":1710544483579,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-03-15T23:13:52.522215809Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=687.089µs kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.726+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-03-15T23:13:52.525137431Z level=info msg="Executing migration" id="dashboard permissions uid scopes" policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5b704fa0-786f-426e-ab49-de6046b0a817","timestampMs":1710544483579,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-03-15T23:13:52.525880062Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=743.001µs kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | 
[2024-03-15T23:14:43.729+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE grafana | logger=migrator t=2024-03-15T23:13:52.53042238Z level=info msg="Executing migration" id="drop managed folder create actions" policy-pap | [2024-03-15T23:14:43.734+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.530634686Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=212.846µs policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5b704fa0-786f-426e-ab49-de6046b0a817","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"f008dd50-9471-4f36-80d6-f78aa5ec5aec","timestampMs":1710544483724,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-03-15 23:14:22,478] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.533568318Z level=info msg="Executing migration" id="alerting notification permissions" policy-pap | [2024-03-15T23:14:43.735+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 5b704fa0-786f-426e-ab49-de6046b0a817 grafana | logger=migrator t=2024-03-15T23:13:52.534064332Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=496.044µs kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.762+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-03-15T23:13:52.541035398Z level=info msg="Executing migration" id="create query_history_star table v1" kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"5b704fa0-786f-426e-ab49-de6046b0a817","timestampMs":1710544483579,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-03-15T23:13:52.543173259Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=2.13684ms kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.762+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE grafana | logger=migrator t=2024-03-15T23:13:52.550291119Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.767+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-03-15T23:13:52.55139871Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.107791ms kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"5b704fa0-786f-426e-ab49-de6046b0a817","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"f008dd50-9471-4f36-80d6-f78aa5ec5aec","timestampMs":1710544483724,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-03-15T23:13:52.556824723Z level=info msg="Executing migration" id="add column org_id in query_history_star" kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.768+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange stopping grafana | logger=migrator t=2024-03-15T23:13:52.564760986Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=7.936433ms kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) 
correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.768+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange stopping enqueue grafana | logger=migrator t=2024-03-15T23:13:52.570986151Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.768+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange stopping timer grafana | logger=migrator t=2024-03-15T23:13:52.571055103Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=70.052µs kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-03-15T23:14:43.768+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=5b704fa0-786f-426e-ab49-de6046b0a817, expireMs=1710544513710] grafana | logger=migrator t=2024-03-15T23:13:52.573861842Z level=info msg="Executing migration" id="create correlation table v1" kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange stopping listener
grafana | logger=migrator t=2024-03-15T23:13:52.574915152Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.052699ms
kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange stopped
grafana | logger=migrator t=2024-03-15T23:13:52.580366575Z level=info msg="Executing migration" id="add index correlations.uid"
kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpStateChange successful
grafana | logger=migrator t=2024-03-15T23:13:52.583632377Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=3.264512ms
kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 start publishing next request
grafana | logger=migrator t=2024-03-15T23:13:52.590363556Z level=info msg="Executing migration" id="add index correlations.source_uid"
kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting
grafana | logger=migrator t=2024-03-15T23:13:52.591524199Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.165173ms
kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-03-15T23:14:43.769+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting listener
grafana | logger=migrator t=2024-03-15T23:13:52.596084407Z level=info msg="Executing migration" id="add correlation config column"
kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-03-15T23:14:43.770+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting timer
grafana | logger=migrator t=2024-03-15T23:13:52.602405295Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.320408ms
kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-03-15T23:14:43.770+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=d2465129-9ed1-4fca-970a-e7296db7245c, expireMs=1710544513770]
kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.606307395Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
policy-pap | [2024-03-15T23:14:43.770+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate starting enqueue
kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.607112107Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=804.2µs
policy-pap | [2024-03-15T23:14:43.770+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-03-15 23:14:22,479] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.610295887Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d2465129-9ed1-4fca-970a-e7296db7245c","timestampMs":1710544483752,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.612809418Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.51025ms
policy-pap | [2024-03-15T23:14:43.771+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate started
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.622636254Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
policy-pap | [2024-03-15T23:14:43.781+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.6448705Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.234365ms
policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d2465129-9ed1-4fca-970a-e7296db7245c","timestampMs":1710544483752,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.651429594Z level=info msg="Executing migration" id="create correlation v2"
policy-pap | [2024-03-15T23:14:43.782+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.65270711Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.276876ms
policy-pap | [2024-03-15T23:14:43.782+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.658364679Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
policy-pap | {"source":"pap-bc9b7321-9b51-42ef-97ab-0ee05971a3f1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"d2465129-9ed1-4fca-970a-e7296db7245c","timestampMs":1710544483752,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.660013556Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.641756ms
policy-pap | [2024-03-15T23:14:43.782+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.664374368Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
policy-pap | [2024-03-15T23:14:43.792+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.666331233Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.956755ms
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d2465129-9ed1-4fca-970a-e7296db7245c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae64ed1a-2bd0-452c-a2a3-d83350bdbf1d","timestampMs":1710544483781,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.671350215Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
policy-pap | [2024-03-15T23:14:43.792+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.672546258Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.200164ms
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d2465129-9ed1-4fca-970a-e7296db7245c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae64ed1a-2bd0-452c-a2a3-d83350bdbf1d","timestampMs":1710544483781,"name":"apex-4a6e2547-14f7-4b7d-af5c-d49180142040","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.676486859Z level=info msg="Executing migration" id="copy correlation v1 to v2"
policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.677114047Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=625.98µs
policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d2465129-9ed1-4fca-970a-e7296db7245c
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.683832386Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping enqueue
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.685167973Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.334887ms
policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping timer
kafka | [2024-03-15 23:14:22,480] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.691212703Z level=info msg="Executing migration" id="add provisioning column"
policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=d2465129-9ed1-4fca-970a-e7296db7245c, expireMs=1710544513770]
kafka | [2024-03-15 23:14:22,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.701695798Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.483815ms
policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopping listener
kafka | [2024-03-15 23:14:22,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.705216567Z level=info msg="Executing migration" id="create entity_events table"
policy-pap | [2024-03-15T23:14:43.793+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate stopped
kafka | [2024-03-15 23:14:22,481] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.706319608Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.102231ms
policy-pap | [2024-03-15T23:14:43.802+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 PdpUpdate successful
kafka | [2024-03-15 23:14:22,517] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.709642012Z level=info msg="Executing migration" id="create dashboard public config v1"
policy-pap | [2024-03-15T23:14:43.803+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-4a6e2547-14f7-4b7d-af5c-d49180142040 has no more requests
kafka | [2024-03-15 23:14:22,517] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.710874846Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.232074ms
policy-pap | [2024-03-15T23:14:49.283+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.715788475Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
policy-pap | [2024-03-15T23:14:49.290+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.71702998Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
policy-pap | [2024-03-15T23:14:49.676+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup
kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.720670932Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-pap | [2024-03-15T23:14:50.243+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup
kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.721275029Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-pap | [2024-03-15T23:14:50.243+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.727322959Z level=info msg="Executing migration" id="Drop old dashboard public config table"
kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
policy-pap | [2024-03-15T23:14:50.749+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.728827511Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.503882ms
kafka | [2024-03-15 23:14:22,521] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
policy-pap | [2024-03-15T23:14:50.980+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0
grafana | logger=migrator t=2024-03-15T23:13:52.738223886Z level=info msg="Executing migration" id="recreate dashboard public config v1"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
policy-pap | [2024-03-15T23:14:51.060+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
grafana | logger=migrator t=2024-03-15T23:13:52.740119629Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.895093ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
policy-pap | [2024-03-15T23:14:51.060+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.74548272Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
policy-pap | [2024-03-15T23:14:51.060+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.747565819Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.079438ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
policy-pap | [2024-03-15T23:14:51.074+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-03-15T23:14:50Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-03-15T23:14:51Z, user=policyadmin)]
grafana | logger=migrator t=2024-03-15T23:13:52.751663144Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
policy-pap | [2024-03-15T23:14:51.762+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.752925549Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.261935ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
policy-pap | [2024-03-15T23:14:51.763+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
grafana | logger=migrator t=2024-03-15T23:13:52.758762794Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
policy-pap | [2024-03-15T23:14:51.764+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0
grafana | logger=migrator t=2024-03-15T23:13:52.760539874Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.773219ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
policy-pap | [2024-03-15T23:14:51.764+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.764389972Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-pap | [2024-03-15T23:14:51.764+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.766304816Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.913974ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-pap | [2024-03-15T23:14:51.777+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-15T23:14:51Z, user=policyadmin)]
grafana | logger=migrator t=2024-03-15T23:13:52.774097495Z level=info msg="Executing migration" id="Drop public config table"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-pap | [2024-03-15T23:14:52.126+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup
grafana | logger=migrator t=2024-03-15T23:13:52.77499201Z level=info msg="Migration successfully executed" id="Drop public config table" duration=894.045µs
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-pap | [2024-03-15T23:14:52.126+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.781180034Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-pap | [2024-03-15T23:14:52.126+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
grafana | logger=migrator t=2024-03-15T23:13:52.783126369Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.948975ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-pap | [2024-03-15T23:14:52.126+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
grafana | logger=migrator t=2024-03-15T23:13:52.786847564Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-pap | [2024-03-15T23:14:52.127+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.788025087Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.177513ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-pap | [2024-03-15T23:14:52.127+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.792487032Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-pap | [2024-03-15T23:14:52.139+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-15T23:14:52Z, user=policyadmin)]
grafana | logger=migrator t=2024-03-15T23:13:52.79452191Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.033438ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-pap | [2024-03-15T23:15:12.739+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.798761009Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-pap | [2024-03-15T23:15:12.741+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
grafana | logger=migrator t=2024-03-15T23:13:52.800484127Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.723928ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-pap | [2024-03-15T23:15:13.600+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=a918cf66-cf68-45ea-b4be-5105781f3d6f, expireMs=1710544513599]
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.805485638Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
policy-pap | [2024-03-15T23:15:13.711+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=5b704fa0-786f-426e-ab49-de6046b0a817, expireMs=1710544513710]
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.829938786Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.456898ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.834772722Z level=info msg="Executing migration" id="add annotations_enabled column"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.84323829Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.464948ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.847045977Z level=info msg="Executing migration" id="add time_selection_enabled column"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.853795737Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.74867ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.8599436Z level=info msg="Executing migration" id="delete orphaned public dashboards"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.8602848Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=341.15µs
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.865960289Z level=info msg="Executing migration" id="add share column"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.874917661Z level=info msg="Migration successfully executed" id="add share column" duration=8.957182ms
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.878464181Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:52.878840632Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=376.41µs
kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.882454583Z level=info msg="Executing migration" id="create file table" kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.883577675Z level=info msg="Migration successfully executed" id="create file table" duration=1.123032ms grafana | logger=migrator t=2024-03-15T23:13:52.888538385Z level=info msg="Executing migration" id="file table idx: path natural pk" grafana | logger=migrator t=2024-03-15T23:13:52.889777829Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.239684ms kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.893666849Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.895048338Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.380759ms kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-12 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.901363245Z level=info msg="Executing migration" id="create file_meta table" grafana | logger=migrator t=2024-03-15T23:13:52.902321092Z level=info msg="Migration successfully executed" id="create file_meta table" duration=955.037µs kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.909183955Z level=info msg="Executing migration" id="file table idx: path key" kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.911070368Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.886323ms kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.91646572Z level=info msg="Executing migration" id="set path collation in file table" kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.916600664Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=111.653µs kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 
(state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.92036814Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" kafka | [2024-03-15 23:14:22,522] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.920539055Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=170.555µs kafka | [2024-03-15 23:14:22,524] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) grafana | logger=migrator t=2024-03-15T23:13:52.924162327Z level=info msg="Executing migration" id="managed 
permissions migration" kafka | [2024-03-15 23:14:22,524] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.925116624Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=953.887µs kafka | [2024-03-15 23:14:22,579] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-03-15T23:13:52.92962407Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" kafka | [2024-03-15 23:14:22,590] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-03-15T23:13:52.92996905Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=344.22µs kafka | [2024-03-15 23:14:22,592] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:52.935234588Z level=info msg="Executing migration" id="RBAC action name migrator" kafka | [2024-03-15 23:14:22,593] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:52.937282626Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.047688ms kafka | [2024-03-15 23:14:22,594] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 
with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.942602876Z level=info msg="Executing migration" id="Add UID column to playlist" kafka | [2024-03-15 23:14:22,607] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-03-15T23:13:52.951763823Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.160278ms kafka | [2024-03-15 23:14:22,608] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-03-15T23:13:52.955931581Z level=info msg="Executing migration" id="Update uid column values in playlist" kafka | [2024-03-15 23:14:22,608] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:52.956203078Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=271.618µs kafka | [2024-03-15 23:14:22,608] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:52.962922007Z level=info msg="Executing migration" id="Add index for uid in playlist" kafka | [2024-03-15 23:14:22,608] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.964910023Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.987566ms kafka | [2024-03-15 23:14:22,617] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-03-15T23:13:52.969216514Z level=info msg="Executing migration" id="update group index for alert rules" kafka | [2024-03-15 23:14:22,618] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-03-15T23:13:52.969583795Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=368.721µs kafka | [2024-03-15 23:14:22,618] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:52.973584637Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" kafka | [2024-03-15 23:14:22,618] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:52.973834224Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=249.267µs kafka | [2024-03-15 23:14:22,618] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:52.984605027Z level=info msg="Executing migration" id="admin only folder/dashboard permission" kafka | [2024-03-15 23:14:22,626] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-03-15T23:13:52.985599345Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=1.000548ms kafka | [2024-03-15 23:14:22,626] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-03-15T23:13:52.990275547Z level=info msg="Executing migration" id="add action column to seed_assignment" kafka | [2024-03-15 23:14:22,626] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.000184035Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.902518ms kafka | [2024-03-15 23:14:22,626] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.003914957Z level=info msg="Executing migration" id="add scope column to seed_assignment" grafana | logger=migrator t=2024-03-15T23:13:53.013006814Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.091447ms grafana | logger=migrator t=2024-03-15T23:13:53.018322974Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" kafka | 
[2024-03-15 23:14:22,627] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:53.019178062Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=854.797µs kafka | [2024-03-15 23:14:22,632] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-03-15T23:13:53.025514564Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" kafka | [2024-03-15 23:14:22,633] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,633] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,633] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.103296569Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=77.783875ms kafka | [2024-03-15 23:14:22,633] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:53.110885651Z level=info msg="Executing migration" id="add unique index builtin_role_name back" kafka | [2024-03-15 23:14:22,640] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-03-15T23:13:53.112200443Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.319892ms kafka | [2024-03-15 23:14:22,641] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-03-15T23:13:53.11928422Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" kafka | [2024-03-15 23:14:22,641] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.120531279Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.243809ms kafka | [2024-03-15 23:14:22,641] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.124421224Z level=info msg="Executing migration" id="add primary key to seed_assigment" kafka | [2024-03-15 23:14:22,641] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:53.150838058Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=26.413333ms kafka | [2024-03-15 23:14:22,661] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-03-15T23:13:53.156425196Z level=info msg="Executing migration" id="add origin column to seed_assignment" kafka | [2024-03-15 23:14:22,664] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-03-15T23:13:53.163689758Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.264342ms kafka | [2024-03-15 23:14:22,664] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.177993365Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" kafka | [2024-03-15 23:14:22,664] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.178809481Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=820.996µs kafka | [2024-03-15 23:14:22,664] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:53.183401708Z level=info msg="Executing migration" id="prevent seeding OnCall access" kafka | [2024-03-15 23:14:22,680] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-03-15T23:13:53.183861683Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=456.754µs kafka | [2024-03-15 23:14:22,681] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-03-15T23:13:53.188079127Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" kafka | [2024-03-15 23:14:22,681] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.188375077Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=295.36µs kafka | [2024-03-15 23:14:22,682] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.196040922Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" kafka | [2024-03-15 23:14:22,682] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:53.196368742Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=327.96µs kafka | [2024-03-15 23:14:22,691] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-03-15T23:13:53.203546082Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" kafka | [2024-03-15 23:14:22,694] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-03-15T23:13:53.204290885Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=750.464µs kafka | [2024-03-15 23:14:22,694] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.209477381Z level=info msg="Executing migration" id="create folder table" kafka | [2024-03-15 23:14:22,694] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.21132907Z level=info msg="Migration successfully executed" id="create folder table" duration=1.853959ms kafka | [2024-03-15 23:14:22,694] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-03-15T23:13:53.215196604Z level=info msg="Executing migration" id="Add index for parent_uid" kafka | [2024-03-15 23:14:22,702] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-03-15T23:13:53.21664008Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.443086ms kafka | [2024-03-15 23:14:22,703] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-03-15T23:13:53.221916828Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" kafka | [2024-03-15 23:14:22,703] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.223299483Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.382685ms kafka | [2024-03-15 23:14:22,703] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-03-15T23:13:53.227145115Z level=info msg="Executing migration" id="Update folder title length" kafka | [2024-03-15 23:14:22,703] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:53.227262829Z level=info msg="Migration successfully executed" id="Update folder title length" duration=118.824µs
kafka | [2024-03-15 23:14:22,713] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.231919438Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
kafka | [2024-03-15 23:14:22,713] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.234028835Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.109207ms
kafka | [2024-03-15 23:14:22,714] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.238023003Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
kafka | [2024-03-15 23:14:22,714] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.239284903Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.26256ms
grafana | logger=migrator t=2024-03-15T23:13:53.246677169Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
kafka | [2024-03-15 23:14:22,714] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:53.248420015Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.742526ms
kafka | [2024-03-15 23:14:22,720] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.254545691Z level=info msg="Executing migration" id="Sync dashboard and folder table"
kafka | [2024-03-15 23:14:22,720] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.255129979Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=584.678µs
kafka | [2024-03-15 23:14:22,720] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.258642182Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
kafka | [2024-03-15 23:14:22,720] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.259025674Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=383.772µs
kafka | [2024-03-15 23:14:22,720] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:53.263800166Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
kafka | [2024-03-15 23:14:22,727] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.265608694Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.808258ms
kafka | [2024-03-15 23:14:22,727] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.270376657Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
kafka | [2024-03-15 23:14:22,727] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.272265907Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.889961ms
kafka | [2024-03-15 23:14:22,727] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.276082109Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
kafka | [2024-03-15 23:14:22,727] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:53.277314748Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.232719ms
kafka | [2024-03-15 23:14:22,735] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.281851223Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
kafka | [2024-03-15 23:14:22,736] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.283873508Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.021125ms
kafka | [2024-03-15 23:14:22,736] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.290534551Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
kafka | [2024-03-15 23:14:22,736] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.291855233Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.321523ms
kafka | [2024-03-15 23:14:22,736] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:53.297484513Z level=info msg="Executing migration" id="create anon_device table"
kafka | [2024-03-15 23:14:22,743] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.298561637Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.076984ms
kafka | [2024-03-15 23:14:22,743] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.302361538Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
kafka | [2024-03-15 23:14:22,743] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.303852776Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.493808ms
kafka | [2024-03-15 23:14:22,743] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-03-15 23:14:22,743] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-03-15 23:14:22,749] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.309341481Z level=info msg="Executing migration" id="add index anon_device.updated_at"
kafka | [2024-03-15 23:14:22,749] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.311412618Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.071616ms
kafka | [2024-03-15 23:14:22,749] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.316654045Z level=info msg="Executing migration" id="create signing_key table"
kafka | [2024-03-15 23:14:22,749] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.318311838Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.657683ms
kafka | [2024-03-15 23:14:22,749] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:53.322368098Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
kafka | [2024-03-15 23:14:22,756] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.323680519Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.312202ms
kafka | [2024-03-15 23:14:22,757] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.330553129Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
kafka | [2024-03-15 23:14:22,757] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.333178563Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=2.620224ms
kafka | [2024-03-15 23:14:22,757] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.34123988Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
kafka | [2024-03-15 23:14:22,757] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:53.341713836Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=474.696µs
kafka | [2024-03-15 23:14:22,763] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.345446085Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
kafka | [2024-03-15 23:14:22,763] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.358531693Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=13.086118ms
kafka | [2024-03-15 23:14:22,763] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.363002206Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
kafka | [2024-03-15 23:14:22,763] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.363917015Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=919.059µs
grafana | logger=migrator t=2024-03-15T23:13:53.367358525Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
kafka | [2024-03-15 23:14:22,764] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-03-15 23:14:22,781] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.369298627Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.936832ms
kafka | [2024-03-15 23:14:22,781] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.381722334Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
kafka | [2024-03-15 23:14:22,781] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.38378657Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=2.029554ms
kafka | [2024-03-15 23:14:22,781] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.388619334Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
kafka | [2024-03-15 23:14:22,781] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:53.39004912Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.430756ms
kafka | [2024-03-15 23:14:22,788] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.394000996Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
kafka | [2024-03-15 23:14:22,789] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.395314438Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.313462ms
kafka | [2024-03-15 23:14:22,789] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.399246304Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
kafka | [2024-03-15 23:14:22,789] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.401540107Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.297564ms
kafka | [2024-03-15 23:14:22,789] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:53.405456122Z level=info msg="Executing migration" id="create sso_setting table"
kafka | [2024-03-15 23:14:22,796] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.407743745Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=2.281933ms
kafka | [2024-03-15 23:14:22,797] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.412831098Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
kafka | [2024-03-15 23:14:22,797] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.413740717Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=910.229µs
kafka | [2024-03-15 23:14:22,797] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.420834413Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
kafka | [2024-03-15 23:14:22,797] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:53.421382431Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=552.308µs
kafka | [2024-03-15 23:14:22,804] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.427969091Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
kafka | [2024-03-15 23:14:22,805] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.428134766Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=164.495µs
kafka | [2024-03-15 23:14:22,805] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.431926658Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
kafka | [2024-03-15 23:14:22,805] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.443105395Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.179137ms
kafka | [2024-03-15 23:14:22,805] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-03-15T23:13:53.449194969Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
kafka | [2024-03-15 23:14:22,813] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-03-15T23:13:53.45891738Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.720951ms
kafka | [2024-03-15 23:14:22,814] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-03-15T23:13:53.467744772Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
kafka | [2024-03-15 23:14:22,814] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.468181286Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=436.824µs
kafka | [2024-03-15 23:14:22,814] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-03-15T23:13:53.47144051Z level=info msg="migrations completed" performed=547 skipped=0 duration=4.112204977s
kafka | [2024-03-15 23:14:22,814] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=sqlstore t=2024-03-15T23:13:53.481925275Z level=info msg="Created default admin" user=admin
kafka | [2024-03-15 23:14:22,826] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=sqlstore t=2024-03-15T23:13:53.48238634Z level=info msg="Created default organization"
kafka | [2024-03-15 23:14:22,828] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=secrets t=2024-03-15T23:13:53.488576197Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
kafka | [2024-03-15 23:14:22,828] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
grafana | logger=plugin.store t=2024-03-15T23:13:53.509152545Z level=info msg="Loading plugins..."
kafka | [2024-03-15 23:14:22,828] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=local.finder t=2024-03-15T23:13:53.551316162Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
kafka | [2024-03-15 23:14:22,828] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=plugin.store t=2024-03-15T23:13:53.551350483Z level=info msg="Plugins loaded" count=55 duration=42.199738ms
kafka | [2024-03-15 23:14:22,836] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=query_data t=2024-03-15T23:13:53.55813338Z level=info msg="Query Service initialization"
kafka | [2024-03-15 23:14:22,836] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=live.push_http t=2024-03-15T23:13:53.561831788Z level=info msg="Live Push Gateway initialization"
kafka | [2024-03-15 23:14:22,836] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
grafana | logger=ngalert.migration t=2024-03-15T23:13:53.56721181Z level=info msg=Starting
kafka | [2024-03-15 23:14:22,836] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=ngalert.migration t=2024-03-15T23:13:53.567642783Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
kafka | [2024-03-15 23:14:22,836] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=ngalert.migration orgID=1 t=2024-03-15T23:13:53.568230842Z level=info msg="Migrating alerts for organisation"
kafka | [2024-03-15 23:14:22,849] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=ngalert.migration orgID=1 t=2024-03-15T23:13:53.568868903Z level=info msg="Alerts found to migrate" alerts=0
kafka | [2024-03-15 23:14:22,849] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=ngalert.migration t=2024-03-15T23:13:53.570836725Z level=info msg="Completed alerting migration"
kafka | [2024-03-15 23:14:22,849] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
kafka | [2024-03-15 23:14:22,849] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=ngalert.state.manager t=2024-03-15T23:13:53.598173469Z level=info msg="Running in alternative execution of Error/NoData mode"
kafka | [2024-03-15 23:14:22,849] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=infra.usagestats.collector t=2024-03-15T23:13:53.599951896Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
kafka | [2024-03-15 23:14:22,855] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=provisioning.datasources t=2024-03-15T23:13:53.601725771Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
kafka | [2024-03-15 23:14:22,855] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=provisioning.alerting t=2024-03-15T23:13:53.614811099Z level=info msg="starting to provision alerting"
kafka | [2024-03-15 23:14:22,855] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
grafana | logger=provisioning.alerting t=2024-03-15T23:13:53.61482731Z level=info msg="finished to provision alerting"
kafka | [2024-03-15 23:14:22,856] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=ngalert.state.manager t=2024-03-15T23:13:53.615066977Z level=info msg="Warming state cache for startup"
kafka | [2024-03-15 23:14:22,856] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=ngalert.multiorg.alertmanager t=2024-03-15T23:13:53.615248193Z level=info msg="Starting MultiOrg Alertmanager"
kafka | [2024-03-15 23:14:22,863] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=ngalert.state.manager t=2024-03-15T23:13:53.615591794Z level=info msg="State cache has been initialized" states=0 duration=525.377µs
kafka | [2024-03-15 23:14:22,863] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=ngalert.scheduler t=2024-03-15T23:13:53.615640506Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
kafka | [2024-03-15 23:14:22,864] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
grafana | logger=ticker t=2024-03-15T23:13:53.615708068Z level=info msg=starting first_tick=2024-03-15T23:14:00Z
kafka | [2024-03-15 23:14:22,864] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=grafanaStorageLogger t=2024-03-15T23:13:53.616906196Z level=info msg="Storage starting"
kafka | [2024-03-15 23:14:22,864] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=http.server t=2024-03-15T23:13:53.618464056Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
kafka | [2024-03-15 23:14:22,869] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=provisioning.dashboard t=2024-03-15T23:13:53.654511198Z level=info msg="starting to provision dashboards"
kafka | [2024-03-15 23:14:22,870] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=sqlstore.transactions t=2024-03-15T23:13:53.67118173Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
kafka | [2024-03-15 23:14:22,870] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
grafana | logger=sqlstore.transactions t=2024-03-15T23:13:53.681660815Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
kafka | [2024-03-15 23:14:22,870] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=plugins.update.checker t=2024-03-15T23:13:53.708179012Z level=info msg="Update check succeeded" duration=93.203948ms
kafka | [2024-03-15 23:14:22,870] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
grafana | logger=grafana.update.checker t=2024-03-15T23:13:53.72502925Z level=info msg="Update check succeeded" duration=110.062236ms
kafka | [2024-03-15 23:14:22,879] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=provisioning.dashboard t=2024-03-15T23:13:53.969403907Z level=info msg="finished to provision dashboards"
kafka | [2024-03-15 23:14:22,880] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=grafana-apiserver t=2024-03-15T23:13:54.219888137Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
kafka | [2024-03-15 23:14:22,881] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
grafana | logger=grafana-apiserver t=2024-03-15T23:13:54.220306479Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
kafka | [2024-03-15 23:14:22,881] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=infra.usagestats t=2024-03-15T23:14:39.629423581Z level=info msg="Usage stats are ready to report"
kafka | [2024-03-15 23:14:22,881] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
kafka | [2024-03-15 23:14:22,887] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-03-15 23:14:22,888] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-03-15 23:14:22,888] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
kafka | [2024-03-15 23:14:22,888] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-03-15 23:14:22,888] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) kafka | [2024-03-15 23:14:22,895] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,896] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,896] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,896] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,896] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,903] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,903] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,904] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,904] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,904] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,910] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,911] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,911] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,911] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,911] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,918] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,919] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,919] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,919] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,919] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(RYQK08lOSYaXD4Alb86gyg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,926] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,927] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,927] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,927] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,927] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,934] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,935] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,935] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,935] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,935] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,942] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,943] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,943] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,943] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,943] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,952] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,958] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,958] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,958] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,958] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,967] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,968] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,968] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,968] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,968] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,977] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,977] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,977] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,977] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,977] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,983] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,984] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,984] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,984] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,984] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,992] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:22,993] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:22,993] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,993] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:22,993] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:22,999] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:23,000] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:23,000] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,000] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,001] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:23,008] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:23,008] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:23,008] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,008] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,008] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:23,016] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:23,017] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:23,017] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,017] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,017] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:23,025] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:23,025] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:23,025] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,025] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,025] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:23,032] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:23,033] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:23,033] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,033] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,033] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:23,040] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:23,041] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:23,041] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,041] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,041] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:23,048] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:23,048] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:23,048] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,048] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,048] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:23,054] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-03-15 23:14:23,055] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-03-15 23:14:23,055] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,055] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-03-15 23:14:23,055] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(R2o1IzsbR_ucSKqMoC8FrA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | 
[2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-03-15 23:14:23,059] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-03-15 23:14:23,059] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-03-15 23:14:23,060] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-03-15 23:14:23,066] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,070] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,071] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,071] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,071] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,071] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,071] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,071] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,071] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,072] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,072] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,073] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,073] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,076] INFO [Broker id=1] Finished LeaderAndIsr request in 603ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2024-03-15 23:14:23,077] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,080] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,080] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,080] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,080] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,081] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,082] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,083] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,083] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=R2o1IzsbR_ucSKqMoC8FrA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=RYQK08lOSYaXD4Alb86gyg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,084] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,085] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,085] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,085] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,085] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,085] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,087] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,087] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 17 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 18 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler.
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-03-15 23:14:23,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-03-15 23:14:23,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-03-15 23:14:23,092] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-03-15 23:14:23,092] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-03-15 23:14:23,092] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-03-15 23:14:23,097] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,099] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,100] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,101] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-03-15 23:14:23,102] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-03-15 23:14:23,214] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group a833d76c-6968-4ee8-9b4d-b3fefbf07611 in Empty state. 
Created a new member id consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,214] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,237] INFO [GroupCoordinator 1]: Preparing to rebalance group a833d76c-6968-4ee8-9b4d-b3fefbf07611 in state PreparingRebalance with old generation 0 (__consumer_offsets-44) (reason: Adding new member consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,237] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,840] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 2f21b508-fe17-4ab8-9275-1762b58c9ac3 in Empty state. Created a new member id consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:23,848] INFO [GroupCoordinator 1]: Preparing to rebalance group 2f21b508-fe17-4ab8-9275-1762b58c9ac3 in state PreparingRebalance with old generation 0 (__consumer_offsets-5) (reason: Adding new member consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:26,251] INFO [GroupCoordinator 1]: Stabilized group a833d76c-6968-4ee8-9b4d-b3fefbf07611 generation 1 (__consumer_offsets-44) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:26,258] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:26,283] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-99fdea8c-1b20-42a4-83af-e5069d439442 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:26,283] INFO [GroupCoordinator 1]: Assignment received from leader consumer-a833d76c-6968-4ee8-9b4d-b3fefbf07611-3-35a9ab49-163f-457d-aaa8-ddc8c3a1db0e for group a833d76c-6968-4ee8-9b4d-b3fefbf07611 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:26,850] INFO [GroupCoordinator 1]: Stabilized group 2f21b508-fe17-4ab8-9275-1762b58c9ac3 generation 1 (__consumer_offsets-5) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-03-15 23:14:26,871] INFO [GroupCoordinator 1]: Assignment received from leader consumer-2f21b508-fe17-4ab8-9275-1762b58c9ac3-2-e5946d81-a534-498f-907f-81e67fc41f70 for group 2f21b508-fe17-4ab8-9275-1762b58c9ac3 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping policy-api ...
Stopping grafana ...
Stopping kafka ...
Stopping mariadb ...
Stopping simulator ...
Stopping compose_zookeeper_1 ...
Stopping prometheus ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping policy-pap ... done
Stopping simulator ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing policy-api ...
Removing grafana ...
Removing kafka ...
Removing policy-db-migrator ...
Removing mariadb ...
Removing simulator ...
Removing compose_zookeeper_1 ...
Removing prometheus ...
Removing policy-api ... done
Removing policy-apex-pdp ... done
Removing policy-db-migrator ... done
Removing simulator ... done
Removing grafana ... done
Removing kafka ... done
Removing prometheus ... done
Removing policy-pap ... done
Removing mariadb ... done
Removing compose_zookeeper_1 ...
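The GroupCoordinator entries above show Kafka's two-step dynamic-join handshake: a consumer that joins with an unknown member id is handed a freshly generated id (surfacing in the rebalance reason as "rebalance failed due to MemberIdRequiredException") and must rejoin with that id before the group moves from Empty through PreparingRebalance to Stabilized. A minimal toy sketch of that exchange — function and member-id names here are illustrative, not Kafka's actual API:

```shell
#!/usr/bin/env bash
# Toy model of the JoinGroup handshake logged above (illustrative only).
coordinator_join() {
  local member_id="$1"
  if [ -z "$member_id" ]; then
    # First join carries no member id: the coordinator generates one and
    # asks the client to rejoin (logged as MemberIdRequiredException).
    echo "MEMBER_ID_REQUIRED consumer-policy-pap-4-demo"
  else
    # Second join carries the id; the group can rebalance and stabilize.
    echo "STABILIZED generation=1 member=$member_id"
  fi
}

resp=$(coordinator_join "")
case "$resp" in
  MEMBER_ID_REQUIRED\ *)
    member_id=${resp#MEMBER_ID_REQUIRED }
    coordinator_join "$member_id"
    ;;
esac
```

The real coordinator additionally waits out `group.initial.rebalance.delay.ms` before stabilizing, which is why the "Stabilized group" entries land about three seconds after the "Preparing to rebalance" ones.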
done Removing network compose_default ++ cd /w/workspace/policy-pap-master-project-csit-pap + load_set + _setopts=hxB ++ echo braceexpand:hashall:interactive-comments:xtrace ++ tr : ' ' + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o braceexpand + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o hashall + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o interactive-comments + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') + set +o xtrace ++ echo hxB ++ sed 's/./& /g' + for i in $(echo "$_setopts" | sed 's/./& /g') + set +h + for i in $(echo "$_setopts" | sed 's/./& /g') + set +x + [[ -n /tmp/tmp.Xn1lruRwEW ]] + rsync -av /tmp/tmp.Xn1lruRwEW/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap sending incremental file list ./ log.html output.xml report.html testplan.txt sent 919,289 bytes received 95 bytes 1,838,768.00 bytes/sec total size is 918,743 speedup is 1.00 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models + exit 1 Build step 'Execute shell' marked build as failure $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2078 killed; [ssh-agent] Stopped. Robot results publisher started... INFO: Checking test criticality is deprecated and will be dropped in a future release! -Parsing output xml: Done! WARNING! Could not find file: **/log.html WARNING! Could not find file: **/report.html -Copying log files to build dir: Done! -Assigning results to build: Done! -Checking thresholds: Done! Done publishing Robot results. [PostBuildScript] - [INFO] Executing post build scripts. 
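The `load_set` trace above walks over `SHELLOPTS` and over the saved flag string `hxB`, switching each option off with `set +o <name>` / `set +<flag>` (the final `set +x` is why the remaining iterations stop appearing in the trace). A minimal, hypothetical sketch of that pattern, not the actual LF global-jjb script:

```shell
# Sketch (assumed, not the real load_set): disable every long-named option
# currently listed in SHELLOPTS, then each short flag recorded in _setopts.
_setopts=hxB
for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
  set +o "$i"                    # e.g. set +o xtrace
done
for i in $(echo "$_setopts" | sed 's/./& /g'); do
  set "+$i"                      # e.g. set +h, set +x, set +B
done
echo "active short flags now: $-"
```

After this runs, `$-` no longer contains `h` or `x`, so command hashing and xtrace are off for the quieter archiving steps that follow.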
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9688890317075254486.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10598062073148952752.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1620010647216354571.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-48nb from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-48nb/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1491707428034886176.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config11676880199612686298tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
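The package-listing step above snapshots the installed packages before and after the job (`dpkg -l | grep '^ii'`) and archives a `diff` of the two lists. A self-contained sketch of that pattern, with the dpkg listing replaced by a stand-in function so it runs anywhere (the stand-in package names are invented for illustration):

```shell
# Sketch of the start/end package-diff pattern from package-listing.sh.
snapshot() { $1 | sort > "$2"; }                    # list command -> file
list_pkgs() { printf 'pkg-a 1.0\npkg-b 2.0\n'; }    # stand-in for: dpkg -l | grep '^ii'
START=$(mktemp); END=$(mktemp); DIFF=$(mktemp)
snapshot list_pkgs "$START"                         # state before the job
list_pkgs() { printf 'pkg-a 1.0\npkg-c 3.0\n'; }    # state after the job
snapshot list_pkgs "$END"
diff "$START" "$END" > "$DIFF" || true              # diff exits 1 when lists differ
cat "$DIFF"
```

The `|| true` matters under `set -e`: `diff` exits non-zero whenever the package set changed, which is the expected case, not an error.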
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15389883869151244807.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6936332521640930828.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-48nb from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-48nb/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2811132065335310372.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2940540099925184987.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-48nb from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
lf-activate-venv(): INFO: Adding /tmp/venv-48nb/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins11354269088158233846.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-48nb from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-48nb/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1611
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
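Every post-build step above logs `lf-activate-venv(): INFO: Reuse venv:/tmp/venv-48nb from file:/tmp/.os_lf_venv`: the venv path is cached in a marker file so later steps reuse one venv instead of rebuilding it. A hypothetical sketch of that caching behaviour (marker path, function body, and messages are assumptions, not the real lftools/global-jjb code):

```shell
# Assumed sketch of venv reuse via a marker file (cf. /tmp/.os_lf_venv).
MARKER=$(mktemp -u)                    # path only; file does not exist yet
activate_venv() {
  if [ -f "$MARKER" ]; then
    VENV=$(cat "$MARKER")              # later steps: reuse the cached path
    echo "INFO: Reuse venv:$VENV"
  else
    VENV=$(mktemp -d)/venv             # stand-in for: python3 -m venv ...
    echo "$VENV" > "$MARKER"           # first step: record the path
    echo "INFO: Creating python3 venv at $VENV"
  fi
}
activate_venv    # first call creates and records
activate_venv    # subsequent calls reuse
```

This is why only the first `python-tools-install.sh` run in the build reports "Creating" while every later step reports "Reuse".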
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-13424 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001
---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         858       24866           0        6441       30852
Swap:          1023           0        1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:1a:ce:12 brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.209/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85967sec preferred_lft 85967sec
    inet6 fe80::f816:3eff:fe1a:ce12/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:14:ae:a1:6f brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-13424)  03/15/24  _x86_64_  (8 CPU)

23:10:24     LINUX RESTART  (8 CPU)

23:11:01          tps      rtps      wtps   bread/s   bwrtn/s
23:12:01       121.41     39.81     81.60   1813.83  27762.71
23:13:01       131.13     23.11    108.02   2766.74  33297.52
23:14:01       548.63     13.00    535.63    796.17 168179.10
23:15:01        33.96      0.47     33.49     34.13  26939.23
23:16:01        16.56      0.00     16.56      0.00  21042.84
23:17:01        68.17      0.92     67.26     49.33  23184.49
Average:       153.31     12.88    140.43    910.03  50067.65

23:11:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
23:12:01     30077480  31668912   2861740      8.69     70288   1831696   1424148      4.19    901128   1667928    158272
23:13:01     28482120  31641676   4457100     13.53    105288   3315136   1593772      4.69   1011416   3052624   1302884
23:14:01     24484332  30655996   8454888     25.67    157760   6116164   7404776     21.79   2152420   5697360       172
23:15:01     23238468  29526652   9700752     29.45    159324   6227464   8849140     26.04   3343376   5737800       220
23:16:01     23221784  29510860   9717436     29.50    159448   6228040   8849652     26.04   3361056   5737756       284
23:17:01     25436552  31558324   7502668     22.78    160652   6076568   1571116      4.62   1356812   5589880      3052
Average:     25823456  30760403   7115764     21.60    135460   4965845   4948767     14.56   2021035   4580558    244147

23:11:01        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
23:12:01           lo      1.73      1.73      0.18      0.18      0.00      0.00      0.00      0.00
23:12:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:12:01         ens3    173.79    112.63   1048.02     40.09      0.00      0.00      0.00      0.00
23:13:01           lo      7.00      7.00      0.65      0.65      0.00      0.00      0.00      0.00
23:13:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01  br-ecaf75b0a48b  0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01         ens3    226.65    147.83   6510.28     15.21      0.00      0.00      0.00      0.00
23:14:01  vethe55f17e      0.45      0.70      0.05      0.30      0.00      0.00      0.00      0.00
23:14:01           lo      6.60      6.60      0.66      0.66      0.00      0.00      0.00      0.00
23:14:01  veth2fef3ae     29.88     38.39      2.96      4.51      0.00      0.00      0.00      0.00
23:14:01  vethae330f7      0.53      0.67      0.03      0.04      0.00      0.00      0.00      0.00
23:15:01  vethe55f17e      0.13      0.25      0.01      0.01      0.00      0.00      0.00      0.00
23:15:01           lo      5.07      5.07      3.51      3.51      0.00      0.00      0.00      0.00
23:15:01  veth2fef3ae     75.90     88.85     74.11     26.86      0.00      0.00      0.00      0.01
23:15:01  vethae330f7     45.89     40.69     10.79     36.41      0.00      0.00      0.00      0.00
23:16:01  vethe55f17e      0.15      0.05      0.01      0.00      0.00      0.00      0.00      0.00
23:16:01           lo      4.82      4.82      0.36      0.36      0.00      0.00      0.00      0.00
23:16:01  veth2fef3ae      1.50      1.72      0.54      0.39      0.00      0.00      0.00      0.00
23:16:01  vethae330f7      8.67     11.45      2.09      1.32      0.00      0.00      0.00      0.00
23:17:01           lo      5.43      5.43      0.48      0.48      0.00      0.00      0.00      0.00
23:17:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:17:01         ens3   1680.80   1050.91  35260.45    173.00      0.00      0.00      0.00      0.00
Average:           lo      5.11      5.11      0.97      0.97      0.00      0.00      0.00      0.00
Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:         ens3    243.68    147.91   5775.89     23.26      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-13424)  03/15/24  _x86_64_  (8 CPU)

23:10:24     LINUX RESTART  (8 CPU)

23:11:01     CPU     %user     %nice   %system   %iowait    %steal     %idle
23:12:01     all     10.56      0.00      0.77      2.25      0.04     86.38
23:12:01       0      4.05      0.00      0.47      0.55      0.02     94.91
23:12:01       1     13.70      0.00      1.03      0.53      0.05     84.68
23:12:01       2      0.68      0.00      0.30      0.38      0.07     98.56
23:12:01       3      0.55      0.00      0.42     13.50      0.03     85.49
23:12:01       4     12.11      0.00      0.90      0.67      0.02     86.30
23:12:01       5     33.52      0.00      1.64      1.99      0.07     62.79
23:12:01       6     16.68      0.00      0.98      0.30      0.03     82.00
23:12:01       7      3.22      0.00      0.40      0.12      0.02     96.25
23:13:01     all     11.45      0.00      2.18      2.40      0.04     83.93
23:13:01       0      6.08      0.00      2.10      0.55      0.03     91.24
23:13:01       1     19.55      0.00      2.60      1.69      0.05     76.11
23:13:01       2      7.09      0.00      1.57      2.36      0.03     88.95
23:13:01       3      3.84      0.00      1.93      9.44      0.05     84.74
23:13:01       4      5.03      0.00      1.59      0.03      0.03     93.31
23:13:01       5     29.98      0.00      3.49      1.95      0.07     64.51
23:13:01       6     15.39      0.00      2.48      0.75      0.03     81.34
23:13:01       7      4.75      0.00      1.68      2.39      0.02     91.16
23:14:01     all     19.93      0.00      6.34      3.68      0.08     69.97
23:14:01       0     23.72      0.00      7.23      1.00      0.08     67.97
23:14:01       1     23.01      0.00      6.31      1.00      0.10     69.57
23:14:01       2     20.50      0.00      5.56      2.47      0.07     71.40
23:14:01       3     21.83      0.00      6.23      1.19      0.08     70.67
23:14:01       4     20.24      0.00      6.30      2.53      0.07     70.86
23:14:01       5     17.51      0.00      5.82      3.23      0.09     73.36
23:14:01       6     15.90      0.00      7.47     14.22      0.10     62.31
23:14:01       7     16.71      0.00      5.77      3.81      0.07     73.64
23:15:01     all     22.78      0.00      2.02      0.29      0.07     74.84
23:15:01       0     23.76      0.00      1.94      0.03      0.07     74.20
23:15:01       1     19.19      0.00      1.86      0.02      0.05     78.89
23:15:01       2     23.80      0.00      2.49      0.02      0.07     73.63
23:15:01       3     21.62      0.00      1.79      0.15      0.08     76.36
23:15:01       4     27.29      0.00      2.16      0.02      0.07     70.47
23:15:01       5     20.35      0.00      1.56      0.02      0.07     78.01
23:15:01       6     30.03      0.00      2.59      0.23      0.07     67.07
23:15:01       7     16.18      0.00      1.84      1.86      0.07     80.06
23:16:01     all      1.48      0.00      0.16      1.11      0.04     97.20
23:16:01       0      0.63      0.00      0.12      0.00      0.02     99.23
23:16:01       1      1.37      0.00      0.20      0.12      0.05     98.26
23:16:01       2      0.95      0.00      0.15      0.00      0.03     98.86
23:16:01       3      1.98      0.00      0.28      0.03      0.10     97.61
23:16:01       4      0.90      0.00      0.08      0.00      0.03     98.98
23:16:01       5      3.11      0.00      0.17      0.00      0.07     96.66
23:16:01       6      1.47      0.00      0.12      0.12      0.03     98.26
23:16:01       7      1.45      0.00      0.20      8.61      0.03     89.70
23:17:01     all      3.39      0.00      0.62      1.36      0.04     94.58
23:17:01       0      2.66      0.00      0.70      0.03      0.05     96.56
23:17:01       1      1.24      0.00      0.64      0.35      0.03     97.74
23:17:01       2      2.36      0.00      0.67      0.15      0.05     96.78
23:17:01       3      1.81      0.00      0.55      0.12      0.03     97.49
23:17:01       4     12.75      0.00      0.67      0.38      0.05     86.15
23:17:01       5      3.09      0.00      0.50      0.25      0.03     96.13
23:17:01       6      1.92      0.00      0.65      1.32      0.03     96.07
23:17:01       7      1.27      0.00      0.64      8.27      0.03     89.79
Average:     all     11.57      0.00      2.00      1.84      0.05     84.53
Average:       0     10.10      0.00      2.08      0.36      0.04     87.41
Average:       1     12.98      0.00      2.09      0.62      0.06     84.25
Average:       2      9.19      0.00      1.78      0.89      0.05     88.09
Average:       3      8.57      0.00      1.86      4.08      0.06     85.43
Average:       4     13.03      0.00      1.94      0.60      0.04     84.38
Average:       5     17.87      0.00      2.18      1.23      0.06     78.66
Average:       6     13.56      0.00      2.37      2.79      0.05     81.24
Average:       7      7.23      0.00      1.74      4.18      0.04     86.81
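The sar `Average:` rows are time-weighted means of the interval samples; since these intervals are all 60 s, a plain mean of the per-interval `%user` values for `all` should land close to the reported 11.57 (it differs only by display rounding). A quick awk sanity check:

```shell
# Mean of the six per-interval "%user" values for CPU "all" reported above.
avg=$(printf '%s\n' 10.56 11.45 19.93 22.78 1.48 3.39 |
  awk '{ s += $1; n++ } END { printf "%.2f", s / n }')
echo "mean %user across intervals: $avg   (sar reports 11.57)"
```

The 0.03 gap comes from sar averaging the raw counters, not the rounded two-decimal figures it prints.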