Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/136986
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-12237 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-verify-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-qlJsczZGnYZ4/agent.2564
SSH_AGENT_PID=2566
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_16388038635409657152.key (/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/private_key_16388038635409657152.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-verify-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git refs/changes/86/136986/4 # timeout=30
 > git rev-parse f2e4da7e296548fb3980fd212e3a67dc83254e1d^{commit} # timeout=10
JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
Checking out Revision f2e4da7e296548fb3980fd212e3a67dc83254e1d (refs/changes/86/136986/4)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f2e4da7e296548fb3980fd212e3a67dc83254e1d # timeout=30
Commit message: "Add kafka support in Policy CSIT"
 > git rev-parse FETCH_HEAD^{commit} # timeout=10
 > git rev-list --no-walk b9d434aeef048c4ea2cf9bd8a27681d375ec5b85 # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins8280005125366491228.sh
---> python-tools-install.sh
Setup pyenv:
  system (set by /opt/pyenv/version)
  3.8.13 (set by /opt/pyenv/version)
  3.9.13 (set by /opt/pyenv/version)
  3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-uGlI
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-uGlI/bin to PATH
Generating Requirements File
Python 3.10.6
pip 23.3.2 from /tmp/venv-uGlI/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4 argcomplete==3.2.1 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2
beautifulsoup4==4.12.2 boto3==1.34.19 botocore==1.34.19 bs4==0.0.1 cachetools==5.3.2
certifi==2023.11.17 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2
click==8.1.7 cliff==4.5.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==2.5.0
decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.4.2
docker==4.2.2 dogpile.cache==1.3.0 email-validator==2.1.0.post1 filelock==3.13.1
future==0.18.3 gitdb==4.0.11 GitPython==3.1.41 google-auth==2.26.2 httplib2==0.22.0
identify==2.5.33 idna==3.6 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3
jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.20.0
jsonschema-specifications==2023.12.1 keystoneauth1==5.5.0 kubernetes==29.0.0
lftools==0.37.8 lxml==5.1.0 MarkupSafe==2.1.3 msgpack==1.0.7 multi_key_dict==2.0.3
munch==4.0.0 netaddr==0.10.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0
oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==0.62.0 os-client-config==2.1.0
os-service-types==1.7.0 osc-lib==3.0.0 oslo.config==9.3.0 oslo.context==5.3.0
oslo.i18n==6.2.0 oslo.log==5.4.0 oslo.serialization==5.3.0 oslo.utils==6.3.0
packaging==23.2 pbr==6.0.0 platformdirs==4.1.0 prettytable==3.9.0 pyasn1==0.5.1
pyasn1-modules==0.3.0 pycparser==2.21 pygerrit2==2.0.15 PyGithub==2.1.1
pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2
pyrsistent==0.20.0 python-cinderclient==9.4.0 python-dateutil==2.8.2
python-heatclient==3.4.0 python-jenkins==1.8.2 python-keystoneclient==5.3.0
python-magnumclient==4.3.0 python-novaclient==18.4.0 python-openstackclient==6.0.0
python-swiftclient==4.4.0 pytz==2023.3.post1 PyYAML==6.0.1 referencing==0.32.1
requests==2.31.0 requests-oauthlib==1.3.1 requestsexceptions==1.4.0 rfc3986==2.0.0
rpds-py==0.17.1 rsa==4.9 ruamel.yaml==0.18.5 ruamel.yaml.clib==0.2.8
s3transfer==0.10.0 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5
stevedore==5.1.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.3 tqdm==4.66.1
typing_extensions==4.9.0 tzdata==2023.4 urllib3==1.26.18 virtualenv==20.25.0
wcwidth==0.2.13 websocket-client==1.7.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0
yq==3.2.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
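The `lf-activate-venv()` helper above creates a throwaway python3 venv and prepends its bin directory to PATH before installing tooling. A minimal sketch of that pattern (the directory template is an assumption; the `lftools` install the real helper performs needs network access and is left out so the sketch stays self-contained):

```shell
#!/bin/bash
# Minimal sketch of the lf-activate-venv() pattern seen above:
# create a python3 venv in a temp directory, activate it, and put
# its bin directory first on PATH. (Installing lftools, as the real
# helper does, is omitted here.)
set -eu
VENV_DIR="$(mktemp -d /tmp/venv-XXXX)"
python3 -m venv --clear "${VENV_DIR}"
# shellcheck disable=SC1091
source "${VENV_DIR}/bin/activate"
export PATH="${VENV_DIR}/bin:${PATH}"
echo "venv active: ${VIRTUAL_ENV}"
```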
[policy-pap-master-project-csit-verify-pap] $ /bin/sh /tmp/jenkins5108499025870347985.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-verify-pap] $ /bin/sh -xe /tmp/jenkins4693839879865529588.sh
+ /w/workspace/policy-pap-master-project-csit-verify-pap/csit/run-project-csit.sh pap
+ set +u
+ save_set
+ RUN_CSIT_SAVE_SET=ehxB
+ RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
+ '[' 1 -eq 0 ']'
+ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
+ export SCRIPTS=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts
+ SCRIPTS=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts
+ export ROBOT_VARIABLES=
+ ROBOT_VARIABLES=
+ export PROJECT=pap
+ PROJECT=pap
+ cd /w/workspace/policy-pap-master-project-csit-verify-pap
+ rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap
+ mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap
+ source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/prepare-robot-env.sh
++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
+++ mktemp -d
++ ROBOT_VENV=/tmp/tmp.F8qRBjyZbf
++ echo ROBOT_VENV=/tmp/tmp.F8qRBjyZbf
+++ python3 --version
++ echo 'Python version is: Python 3.6.9'
Python version is: Python 3.6.9
++ python3 -m venv --clear /tmp/tmp.F8qRBjyZbf
++ source /tmp/tmp.F8qRBjyZbf/bin/activate
+++ deactivate nondestructive
+++ '[' -n '' ']'
+++ '[' -n '' ']'
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
+++ '[' -n '' ']'
+++ unset VIRTUAL_ENV
+++ '[' '!' nondestructive = nondestructive ']'
+++ VIRTUAL_ENV=/tmp/tmp.F8qRBjyZbf
+++ export VIRTUAL_ENV
+++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
+++ PATH=/tmp/tmp.F8qRBjyZbf/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
+++ export PATH
+++ '[' -n '' ']'
+++ '[' -z '' ']'
+++ _OLD_VIRTUAL_PS1=
+++ '[' 'x(tmp.F8qRBjyZbf) ' '!=' x ']'
+++ PS1='(tmp.F8qRBjyZbf) '
+++ export PS1
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
++ set -exu
++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
++ echo 'Installing Python Requirements'
Installing Python Requirements
++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/pylibs.txt
++ python3 -m pip -qq freeze
bcrypt==4.0.1 beautifulsoup4==4.12.2 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1
charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9
elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.6 importlib-resources==5.4.0
ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0
jsonpointer==2.3 lxml==5.1.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28
paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1
pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2
regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2
robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0
six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7
WebTest==3.0.0 zipp==3.6.0
++ mkdir -p /tmp/tmp.F8qRBjyZbf/src/onap
++ rm -rf /tmp/tmp.F8qRBjyZbf/src/onap/testsuite
++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
++ echo 'Installing python confluent-kafka library'
Installing python confluent-kafka library
++ python3 -m pip install -qq confluent-kafka
++ echo 'Uninstall docker-py and reinstall docker.'
Uninstall docker-py and reinstall docker.
++ python3 -m pip uninstall -y -qq docker
++ python3 -m pip install -U -qq docker
++ python3 -m pip -qq freeze
bcrypt==4.0.1 beautifulsoup4==4.12.2 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1
charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1
deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9
elasticsearch-dsl==7.4.1 enum34==1.1.10 future==0.18.3 idna==3.6
importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0
jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.1.0
MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0
odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0
ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21
pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 PyYAML==6.0.1
regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2
robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0
robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0
soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7
websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0
++ uname
++ grep -q Linux
++ sudo apt-get -y -qq install libxml2-utils
+ load_set
+ _setopts=ehuxB
++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o nounset
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo ehuxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +e
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +u
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ source_safely /tmp/tmp.F8qRBjyZbf/bin/activate
+ '[' -z /tmp/tmp.F8qRBjyZbf/bin/activate ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /tmp/tmp.F8qRBjyZbf/bin/activate
++ deactivate nondestructive
++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin ']'
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
++ export PATH
++ unset _OLD_VIRTUAL_PATH
++ '[' -n '' ']'
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
++ '[' -n '' ']'
++ unset VIRTUAL_ENV
++ '[' '!' nondestructive = nondestructive ']'
++ VIRTUAL_ENV=/tmp/tmp.F8qRBjyZbf
++ export VIRTUAL_ENV
++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
++ PATH=/tmp/tmp.F8qRBjyZbf/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-verify-pap/csit:/w/workspace/policy-pap-master-project-csit-verify-pap/scripts:/bin
++ export PATH
++ '[' -n '' ']'
++ '[' -z '' ']'
++ _OLD_VIRTUAL_PS1='(tmp.F8qRBjyZbf) '
++ '[' 'x(tmp.F8qRBjyZbf) ' '!=' x ']'
++ PS1='(tmp.F8qRBjyZbf) (tmp.F8qRBjyZbf) '
++ export PS1
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests
+ TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests
+ export TEST_OPTIONS=
+ TEST_OPTIONS=
++ mktemp -d
+ WORKDIR=/tmp/tmp.qeeZixE4CM
+ cd /tmp/tmp.qeeZixE4CM
+ docker login -u docker -p docker nexus3.onap.org:10001
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
+ SETUP=/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
+ '[' -f /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh ']'
+ echo 'Running setup script /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh'
Running setup script /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
+ source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/setup-pap.sh
++ source /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/node-templates.sh
+++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-verify-pap/.gitreview
+++ GERRIT_BRANCH=master
+++ echo GERRIT_BRANCH=master
GERRIT_BRANCH=master
+++ rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/models
+++ mkdir /w/workspace/policy-pap-master-project-csit-verify-pap/models
+++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-verify-pap/models
Cloning into '/w/workspace/policy-pap-master-project-csit-verify-pap/models'...
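node-templates.sh derives GERRIT_BRANCH by pulling the `defaultbranch` key out of `.gitreview` with the awk one-liner traced above. A small stand-alone reproduction (the `.gitreview` contents here are a typical example written for illustration, not copied from this job's workspace):

```shell
#!/bin/bash
# Reproduce the GERRIT_BRANCH extraction traced above: awk splits each
# line on '=', and prints the value whenever the key is "defaultbranch".
set -eu
workdir="$(mktemp -d)"
cat > "${workdir}/.gitreview" <<'EOF'
[gerrit]
host=gerrit.onap.org
port=29418
project=policy/docker.git
defaultbranch=master
EOF

GERRIT_BRANCH=$(awk -F= '$1 == "defaultbranch" { print $2 }' "${workdir}/.gitreview")
echo "GERRIT_BRANCH=${GERRIT_BRANCH}"
```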
+++ export DATA=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
++ source /w/workspace/policy-pap-master-project-csit-verify-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-verify-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-verify-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
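start-compose.sh walks its arguments with the case/shift loop visible in the trace: the positional argument becomes the component, and flags such as `--grafana` or `--gui` toggle booleans. A stand-alone sketch of that loop (the function wrapper is added here for illustration; the real script runs the loop at top level):

```shell
#!/bin/bash
# Sketch of the argument loop traced in start-compose.sh: any known
# flag toggles its boolean, anything else is taken as the component.
grafana=false
gui=false
component=""

parse_compose_args() {
    while [[ $# -gt 0 ]]; do
        key="$1"
        case $key in
            --grafana) grafana=true ;;
            --gui)     gui=true ;;
            *)         component="$key" ;;
        esac
        shift
    done
}

# Same invocation as the job above: apex-pdp with Grafana enabled.
parse_compose_args apex-pdp --grafana
echo "component=${component} grafana=${grafana} gui=${gui}"
```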
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
latest: Pulling from prom/prometheus
Digest: sha256:a67e5e402ff5410b86ec48b39eab1a3c4df2a7e78a71bf025ec5e32e09090ad4
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:6b5b37eb35bbf30e7f64bd7f0fd41c0a5b7637f65d3bf93223b04a192b8bf3e2
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:bd33796bedd5ad2337f9468b0cd9d04db279a1f831716e1c03cebe5a20ced20b
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.0)...
3.1.0: Pulling from onap/policy-api
Digest: sha256:5c4c03761af8683035bdfb23ad490044d6b151e5d5939a59b93a6064a761dbbd
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.0
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-pap
Digest: sha256:37c4361d99c3f559835790653cd75fd194587e3e5951cbeb5086d1c0b8af6b74
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT
Creating simulator ...
Creating compose_zookeeper_1 ...
Creating mariadb ...
Creating prometheus ...
Creating mariadb ... done
Creating policy-db-migrator ...
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating simulator ... done
Creating compose_zookeeper_1 ... done
Creating kafka ...
Creating prometheus ... done
Creating grafana ...
Creating grafana ... done
Creating kafka ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-verify-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
NAMES                 STATUS
policy-apex-pdp       Up 10 seconds
policy-pap            Up 11 seconds
grafana               Up 13 seconds
kafka                 Up 12 seconds
policy-api            Up 17 seconds
prometheus            Up 14 seconds
compose_zookeeper_1   Up 15 seconds
mariadb               Up 19 seconds
simulator             Up 16 seconds
NAMES                 STATUS
policy-apex-pdp       Up 15 seconds
policy-pap            Up 16 seconds
grafana               Up 18 seconds
kafka                 Up 17 seconds
policy-api            Up 22 seconds
prometheus            Up 19 seconds
compose_zookeeper_1   Up 20 seconds
mariadb               Up 24 seconds
simulator             Up 21 seconds
NAMES                 STATUS
policy-apex-pdp       Up 20 seconds
policy-pap            Up 21 seconds
grafana               Up 23 seconds
kafka                 Up 22 seconds
policy-api            Up 27 seconds
prometheus            Up 24 seconds
compose_zookeeper_1   Up 25 seconds
mariadb               Up 29 seconds
simulator             Up 26 seconds
NAMES                 STATUS
policy-apex-pdp       Up 25 seconds
policy-pap            Up 26 seconds
grafana               Up 28 seconds
kafka                 Up 27 seconds
policy-api            Up 32 seconds
prometheus            Up 29 seconds
compose_zookeeper_1   Up 30 seconds
mariadb               Up 34 seconds
simulator             Up 31 seconds
NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 31 seconds
grafana               Up 33 seconds
kafka                 Up 32 seconds
policy-api            Up 37 seconds
prometheus            Up 34 seconds
compose_zookeeper_1   Up 35 seconds
mariadb               Up 39 seconds
simulator             Up 36 seconds
NAMES                 STATUS
policy-apex-pdp       Up 35 seconds
policy-pap            Up 36 seconds
grafana               Up 38 seconds
kafka                 Up 37 seconds
policy-api            Up 42 seconds
prometheus            Up 39 seconds
compose_zookeeper_1   Up 40 seconds
mariadb               Up 44 seconds
simulator             Up 41 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
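wait_for_rest.sh polls the mapped PAP port (30003) until something answers, printing the container table between attempts. The script itself is not shown in the log; a plausible equivalent polling loop using bash's `/dev/tcp` redirection (function name matches the script, but the body and the timeout default are assumptions):

```shell
#!/bin/bash
# Hypothetical equivalent of wait_for_rest.sh: retry a TCP connect
# to host:port until it succeeds or the timeout expires.
wait_for_rest() {
    local host="$1" port="$2" timeout="${3:-120}" start now
    start=$(date +%s)
    echo "Waiting for REST to come up on ${host} port ${port}..."
    # bash treats /dev/tcp/<host>/<port> as a TCP connection attempt
    while ! (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
        now=$(date +%s)
        if (( now - start >= timeout )); then
            echo "${host}:${port} never came up" >&2
            return 1
        fi
        sleep 2
    done
}
```

The real script additionally runs `docker ps` on each iteration, which is what produces the repeated container tables above.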
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ tee /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 18:50:28 up 1:13, 0 users, load average: 3.48, 1.64, 0.66
Tasks: 202 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.9 us, 0.2 sy, 0.0 ni, 98.0 id, 0.9 wa, 0.0 hi, 0.0 si, 0.0 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         22G        1.3M        6.3G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 35 seconds
policy-pap            Up 36 seconds
grafana               Up 38 seconds
kafka                 Up 37 seconds
policy-api            Up 43 seconds
prometheus            Up 39 seconds
compose_zookeeper_1   Up 40 seconds
mariadb               Up 45 seconds
simulator             Up 42 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
cc05f2bdded4   policy-apex-pdp       123.77%   184.4MiB / 31.41GiB   0.57%   7.47kB / 7.18kB   0B / 0B         48
0f616df0dad2   policy-pap            3.14%     494.9MiB / 31.41GiB   1.54%   26.7kB / 29.2kB   0B / 181MB      61
fd889a8534a2   grafana               0.43%     51.52MiB / 31.41GiB   0.16%   18.8kB / 3.47kB   0B / 23.9MB     16
5fb227298fe3   kafka                 23.90%    392.2MiB / 31.41GiB   1.22%   71.2kB / 71.9kB   0B / 508kB      81
d867129f0e2d   policy-api            0.11%     500.8MiB / 31.41GiB   1.56%   999kB / 710kB     0B / 0B         55
362a492d9e1b   prometheus            0.00%     19.14MiB / 31.41GiB   0.06%   1.28kB / 158B     0B / 0B         12
df2fa4b07319   compose_zookeeper_1   0.11%     98.83MiB / 31.41GiB   0.31%   56.1kB / 51.4kB   0B / 401kB      59
e43bbe00c40e   mariadb               0.03%     101.7MiB / 31.41GiB   0.32%   996kB / 1.19MB    11MB / 68.9MB   37
8679c3ab0f72   simulator             0.31%     180.5MiB / 31.41GiB   0.56%   1.19kB / 0B       0B / 0B         93
+ echo
+ cd /tmp/tmp.qeeZixE4CM
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/|'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot ...
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-verify-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-verify-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
DEPLOYMENT != UNDEPLOYMENT
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | FAIL |
22 tests, 21 passed, 1 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | FAIL |
30 tests, 29 passed, 1 failed
==============================================================================
Output: /tmp/tmp.qeeZixE4CM/output.xml
Log: /tmp/tmp.qeeZixE4CM/log.html
Report: /tmp/tmp.qeeZixE4CM/report.html
+ RESULT=1
+ load_set
+ _setopts=hxB
++ tr : ' '
++ echo braceexpand:hashall:interactive-comments:xtrace
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 1'
RESULT: 1
+ exit 1
+ on_exit
+ rc=1
+ [[ -n /w/workspace/policy-pap-master-project-csit-verify-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
grafana               Up 2 minutes
kafka                 Up 2 minutes
policy-api            Up 2 minutes
prometheus            Up 2 minutes
compose_zookeeper_1   Up 2 minutes
mariadb               Up 2 minutes
simulator
Up 2 minutes + docker_stats ++ uname -s + '[' Linux == Darwin ']' + sh -c 'top -bn1 | head -3' top - 18:52:18 up 1:15, 0 users, load average: 0.84, 1.32, 0.66 Tasks: 200 total, 1 running, 128 sleeping, 0 stopped, 0 zombie %Cpu(s): 1.0 us, 0.2 sy, 0.0 ni, 97.9 id, 0.8 wa, 0.0 hi, 0.0 si, 0.0 st + echo + sh -c 'free -h' total used free shared buff/cache available Mem: 31G 2.9G 22G 1.3M 6.3G 28G Swap: 1.0G 0B 1.0G + echo + docker ps --format 'table {{ .Names }}\t{{ .Status }}' NAMES STATUS policy-apex-pdp Up 2 minutes policy-pap Up 2 minutes grafana Up 2 minutes kafka Up 2 minutes policy-api Up 2 minutes prometheus Up 2 minutes compose_zookeeper_1 Up 2 minutes mariadb Up 2 minutes simulator Up 2 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS cc05f2bdded4 policy-apex-pdp 0.28% 183.4MiB / 31.41GiB 0.57% 57kB / 91.4kB 0B / 0B 50 0f616df0dad2 policy-pap 1.21% 547.6MiB / 31.41GiB 1.70% 2.33MB / 807kB 0B / 181MB 63 fd889a8534a2 grafana 0.20% 54.34MiB / 31.41GiB 0.17% 19.9kB / 4.58kB 0B / 23.9MB 16 5fb227298fe3 kafka 10.49% 395.3MiB / 31.41GiB 1.23% 242kB / 215kB 0B / 606kB 83 d867129f0e2d policy-api 0.11% 547.8MiB / 31.41GiB 1.70% 2.49MB / 1.26MB 0B / 0B 56 362a492d9e1b prometheus 0.00% 25.33MiB / 31.41GiB 0.08% 181kB / 11kB 0B / 0B 12 df2fa4b07319 compose_zookeeper_1 0.09% 96.78MiB / 31.41GiB 0.30% 59kB / 53kB 0B / 401kB 59 e43bbe00c40e mariadb 0.02% 103MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 69.2MB 28 8679c3ab0f72 simulator 3.35% 180.8MiB / 31.41GiB 0.56% 1.5kB / 0B 0B / 0B 92 + echo + source_safely /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-verify-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! 
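The `relax_set` / `load_set` pair visible in the xtrace above loosens error handling around the Robot run so a failing suite still reaches the cleanup and log-collection steps, then switches the verbose trace options back off. A minimal sketch of that pattern follows; the function names and the recorded option string `hxB` come from the trace, but the function bodies are a reconstruction, not the actual CSIT run scripts.

```shell
#!/usr/bin/env bash
# Sketch of the option save/restore pattern seen in the xtrace above.
# Bodies reconstructed from the trace output; hedged, not verbatim.
set -e

relax_set() {
    set +e          # a failing test command must not abort the job
    set +o pipefail # nor a failing stage of a pipeline
}

load_set() {
    _setopts=hxB    # short options recorded before the run (from the trace)
    for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
        set +o "$i"             # switch each active long option back off
    done
    for i in $(echo "$_setopts" | sed 's/./& /g'); do
        set "+$i"               # likewise for the recorded short options
    done
}

relax_set
false                # stand-in for: python3 -m robot.run ... (exits 1 here)
RESULT=$?
load_set
echo "RESULT: $RESULT"
```

The wrapper then exits with `$RESULT`, which is what produces the `RESULT: 1` / `exit 1` lines in this build even though teardown ran to completion.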
++ '[' -z /w/workspace/policy-pap-master-project-csit-verify-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-verify-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-verify-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... ++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, grafana, kafka, policy-api, policy-db-migrator, prometheus, compose_zookeeper_1, mariadb, simulator zookeeper_1 | ===> User zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper_1 | ===> Configuring ... zookeeper_1 | ===> Running preflight checks ... zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... zookeeper_1 | ===> Launching ... zookeeper_1 | ===> Launching zookeeper ... zookeeper_1 | [2024-01-14 18:49:51,611] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-14 18:49:51,618] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-14 18:49:51,618] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-14 18:49:51,618] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-14 18:49:51,618] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-14 18:49:51,620] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-14 18:49:51,620] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-14 18:49:51,620] 
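The log-collection step of `stop-compose.sh` traced above (dump `docker-compose logs` into `docker_compose.log`, then replay the file into the console) can be sketched as below. The `COMPOSE_FOLDER` default, the echo text, and the file name are taken from the trace; the rest is a hedged reconstruction, not the verbatim script.

```shell
#!/usr/bin/env bash
# Reconstruction of the log-collection step seen in the trace above.
collect_compose_logs() {
    # WORKSPACE default is the path visible in the trace
    COMPOSE_FOLDER=${WORKSPACE:-/w/workspace/policy-pap-master-project-csit-verify-pap}/compose
    cd "$COMPOSE_FOLDER" || return 1
    echo 'Collecting logs from docker compose containers...'
    # capture every container's output in one interleaved file,
    # then replay it into the Jenkins console
    docker-compose logs > docker_compose.log 2>&1
    cat docker_compose.log
}
```

The interleaving is why the `zookeeper_1 |`, `grafana |`, `kafka |` and similar prefixed lines that follow alternate: each prefix identifies the container that emitted the line.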
INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) zookeeper_1 | [2024-01-14 18:49:51,620] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) zookeeper_1 | [2024-01-14 18:49:51,621] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) zookeeper_1 | [2024-01-14 18:49:51,622] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-14 18:49:51,622] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-14 18:49:51,622] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-14 18:49:51,622] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-14 18:49:51,622] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) zookeeper_1 | [2024-01-14 18:49:51,622] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) zookeeper_1 | [2024-01-14 18:49:51,634] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@5fa07e12 (org.apache.zookeeper.server.ServerMetrics) zookeeper_1 | [2024-01-14 18:49:51,637] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-01-14 18:49:51,637] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) zookeeper_1 | [2024-01-14 18:49:51,640] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) zookeeper_1 | [2024-01-14 18:49:51,649] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 
18:49:51,649] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,650] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,650] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,650] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,650] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,650] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,650] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,650] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,650] INFO (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:host.name=df2fa4b07319 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) grafana | logger=settings t=2024-01-14T18:49:50.318665677Z level=info msg="Starting Grafana" version=10.2.3 commit=1e84fede543acc892d2a2515187e545eb047f237 branch=HEAD compiled=2023-12-18T15:46:07Z grafana | logger=settings t=2024-01-14T18:49:50.319145213Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-01-14T18:49:50.319303939Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-01-14T18:49:50.319434523Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-01-14T18:49:50.319528767Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-01-14T18:49:50.319671552Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-01-14T18:49:50.319765705Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-01-14T18:49:50.319876129Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-01-14T18:49:50.320009313Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-01-14T18:49:50.320111437Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-01-14T18:49:50.32020187Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-01-14T18:49:50.320258782Z 
level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-01-14T18:49:50.320389967Z level=info msg=Target target=[all] grafana | logger=settings t=2024-01-14T18:49:50.320452579Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-01-14T18:49:50.320574173Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-01-14T18:49:50.320628955Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-01-14T18:49:50.320752779Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-01-14T18:49:50.320807681Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-01-14T18:49:50.320930295Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-01-14T18:49:50.321417242Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-01-14T18:49:50.321552277Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-01-14T18:49:50.322288183Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-01-14T18:49:50.323277597Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-01-14T18:49:50.324108266Z level=info msg="Migration successfully executed" id="create migration_log table" duration=830.369µs grafana | logger=migrator t=2024-01-14T18:49:50.327589247Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-01-14T18:49:50.328762968Z level=info msg="Migration successfully executed" id="create user table" duration=1.173121ms grafana | logger=migrator t=2024-01-14T18:49:50.33226773Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-01-14T18:49:50.33312426Z level=info msg="Migration 
successfully executed" id="add unique index user.login" duration=860.71µs grafana | logger=migrator t=2024-01-14T18:49:50.336506128Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-01-14T18:49:50.337368508Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=862.28µs grafana | logger=migrator t=2024-01-14T18:49:50.349922164Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-01-14T18:49:50.35123001Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.307966ms grafana | logger=migrator t=2024-01-14T18:49:50.35466381Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-01-14T18:49:50.35581901Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.154751ms grafana | logger=migrator t=2024-01-14T18:49:50.360374458Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-01-14T18:49:50.363657323Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.282454ms grafana | logger=migrator t=2024-01-14T18:49:50.368875074Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-01-14T18:49:50.369866559Z level=info msg="Migration successfully executed" id="create user table v2" duration=991.055µs grafana | logger=migrator t=2024-01-14T18:49:50.373267577Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-01-14T18:49:50.374098686Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=829.299µs grafana | logger=migrator t=2024-01-14T18:49:50.376759929Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator 
t=2024-01-14T18:49:50.377618029Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=857.979µs grafana | logger=migrator t=2024-01-14T18:49:50.383138611Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | logger=migrator t=2024-01-14T18:49:50.383660889Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=522.058µs grafana | logger=migrator t=2024-01-14T18:49:50.386539789Z level=info msg="Executing migration" id="Drop old table user_v1" grafana | logger=migrator t=2024-01-14T18:49:50.387239883Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=699.694µs grafana | logger=migrator t=2024-01-14T18:49:50.392943562Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-01-14T18:49:50.394272848Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.331206ms grafana | logger=migrator t=2024-01-14T18:49:50.397969327Z level=info msg="Executing migration" id="Update user table charset" zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share
/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/ka
fka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.j
ar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,651] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,652] INFO zookeeper.enableEagerACLCheck = false 
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,652] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,652] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,652] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,652] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,652] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,652] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,653] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) zookeeper_1 | [2024-01-14 18:49:51,654] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,654] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,654] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-01-14 18:49:51,655] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) zookeeper_1 | [2024-01-14 18:49:51,655] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-14 18:49:51,655] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-14 18:49:51,655] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-14 18:49:51,656] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-14 18:49:51,656] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-14 18:49:51,656] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) zookeeper_1 | [2024-01-14 18:49:51,658] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-14 18:49:51,658] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) kafka | ===> User grafana | logger=migrator t=2024-01-14T18:49:50.398233946Z level=info msg="Migration successfully executed" id="Update user table charset" duration=266.449µs zookeeper_1 | [2024-01-14 18:49:51,658] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) grafana | logger=migrator t=2024-01-14T18:49:50.404249526Z level=info msg="Executing migration" id="Add last_seen_at column to user" policy-apex-pdp | Waiting for mariadb port 3306... zookeeper_1 | [2024-01-14 18:49:51,658] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) kafka | ===> Configuring ... 
grafana | logger=migrator t=2024-01-14T18:49:50.406224504Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.970669ms policy-apex-pdp | mariadb (172.17.0.2:3306) open mariadb | 2024-01-14 18:49:43+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. policy-api | Waiting for mariadb port 3306... zookeeper_1 | [2024-01-14 18:49:51,658] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) kafka | Running in Zookeeper mode... grafana | logger=migrator t=2024-01-14T18:49:50.409039052Z level=info msg="Executing migration" id="Add missing user data" policy-db-migrator | Waiting for mariadb port 3306... policy-apex-pdp | Waiting for kafka port 9092... mariadb | 2024-01-14 18:49:43+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' policy-pap | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.2:3306) open prometheus | ts=2024-01-14T18:49:49.004Z caller=main.go:539 level=info msg="No time or size retention was set so using the default time retention" duration=15d zookeeper_1 | [2024-01-14 18:49:51,678] INFO Logging initialized @563ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json kafka | ===> Running preflight checks ... grafana | logger=migrator t=2024-01-14T18:49:50.409458497Z level=info msg="Migration successfully executed" id="Add missing user data" duration=419.935µs policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-apex-pdp | kafka (172.17.0.8:9092) open mariadb | 2024-01-14 18:49:43+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
policy-pap | mariadb (172.17.0.2:3306) open policy-api | Waiting for policy-db-migrator port 6824... prometheus | ts=2024-01-14T18:49:49.004Z caller=main.go:583 level=info msg="Starting Prometheus Server" mode=server version="(version=2.48.1, branch=HEAD, revision=63894216648f0d6be310c9d16fb48293c45c9310)" zookeeper_1 | [2024-01-14 18:49:51,754] WARN o.e.j.s.ServletContextHandler@45385f75{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) kafka | ===> Check if /var/lib/kafka/data is writable ... grafana | logger=migrator t=2024-01-14T18:49:50.413416815Z level=info msg="Executing migration" id="Add is_disabled column to user" policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused policy-apex-pdp | Waiting for pap port 6969... mariadb | 2024-01-14 18:49:44+00:00 [Note] [Entrypoint]: Initializing database files policy-pap | Waiting for kafka port 9092... policy-api | policy-db-migrator (172.17.0.6:6824) open prometheus | ts=2024-01-14T18:49:49.004Z caller=main.go:588 level=info build_context="(go=go1.21.5, platform=linux/amd64, user=root@71f108ff5632, date=20231208-23:33:22, tags=netgo,builtinassets,stringlabels)" zookeeper_1 | [2024-01-14 18:49:51,754] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) kafka | ===> Check if Zookeeper is healthy ... 
grafana | logger=migrator t=2024-01-14T18:49:50.415475596Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=2.058192ms
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
policy-apex-pdp | pap (172.17.0.10:6969) open
mariadb | 2024-01-14 18:49:44 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
policy-pap | kafka (172.17.0.8:9092) open
policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
prometheus | ts=2024-01-14T18:49:49.004Z caller=main.go:589 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
zookeeper_1 | [2024-01-14 18:49:51,771] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server)
kafka | [2024-01-14 18:49:55,281] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
grafana | logger=migrator t=2024-01-14T18:49:50.418851574Z level=info msg="Executing migration" id="Add index user.login/user.email"
policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
mariadb | 2024-01-14 18:49:44 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
policy-pap | Waiting for api port 6969...
policy-api |
prometheus | ts=2024-01-14T18:49:49.004Z caller=main.go:590 level=info fd_limits="(soft=1048576, hard=1048576)"
zookeeper_1 | [2024-01-14 18:49:51,794] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
kafka | [2024-01-14 18:49:55,281] INFO Client environment:host.name=5fb227298fe3 (org.apache.zookeeper.ZooKeeper)
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
grafana | logger=migrator t=2024-01-14T18:49:50.420202491Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.350557ms
policy-apex-pdp | [2024-01-14T18:50:28.018+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
mariadb | 2024-01-14 18:49:44 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
policy-pap | api (172.17.0.7:6969) open
policy-api | . ____ _ __ _ _
prometheus | ts=2024-01-14T18:49:49.004Z caller=main.go:591 level=info vm_limits="(soft=unlimited, hard=unlimited)"
zookeeper_1 | [2024-01-14 18:49:51,794] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
kafka | [2024-01-14 18:49:55,281] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused
grafana | logger=migrator t=2024-01-14T18:49:50.428150737Z level=info msg="Executing migration" id="Add is_service_account column to user"
policy-apex-pdp | [2024-01-14T18:50:28.176+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
mariadb |
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
prometheus | ts=2024-01-14T18:49:49.006Z caller=web.go:566 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus | ts=2024-01-14T18:49:49.007Z caller=main.go:1024 level=info msg="Starting TSDB ..."
simulator | overriding logback.xml
kafka | [2024-01-14 18:49:55,282] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded!
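The policy-db-migrator lines above show the wait-for-port pattern these compose images rely on: keep retrying `nc` against mariadb until "Connection refused" turns into "succeeded!", then proceed. A minimal Python sketch of the same loop, assuming nothing beyond the standard library (the function name `wait_for_port` is illustrative, not from the repo):

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 120.0, interval: float = 2.0) -> bool:
    """Poll until a TCP port accepts connections, like the nc retry loop in the log.

    Returns True as soon as a connection succeeds, False once the deadline passes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection resolves the host and completes the TCP handshake
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Connection refused / timed out: the service is not up yet, retry
            time.sleep(interval)
    return False
```

The same idea, with per-service host/port pairs, is what produces the "Waiting for mariadb port 3306..." / "mariadb (172.17.0.2:3306) open" lines from the other containers.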
grafana | logger=migrator t=2024-01-14T18:49:50.430291442Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=2.140335ms
policy-apex-pdp | allow.auto.create.topics = true
mariadb |
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
prometheus | ts=2024-01-14T18:49:49.008Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090
simulator | 2024-01-14 18:49:47,249 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
kafka | [2024-01-14 18:49:55,282] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
policy-db-migrator | 321 blocks
grafana | logger=migrator t=2024-01-14T18:49:50.434433066Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
policy-apex-pdp | auto.commit.interval.ms = 5000
mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
policy-pap |
policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
prometheus | ts=2024-01-14T18:49:49.008Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
simulator | 2024-01-14 18:49:47,339 INFO org.onap.policy.models.simulators starting
kafka | [2024-01-14 18:49:55,282] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,795] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session)
policy-db-migrator | Preparing upgrade release version: 0800
grafana | logger=migrator t=2024-01-14T18:49:50.445634876Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=11.20269ms
policy-apex-pdp | auto.include.jmx.reporter = true
mariadb | To do so, start the server, then issue the following command:
policy-pap | . ____ _ __ _ _
policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
prometheus | ts=2024-01-14T18:49:49.012Z caller=head.go:601 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
simulator | 2024-01-14 18:49:47,340 INFO org.onap.policy.models.simulators starting DMaaP provider
kafka | [2024-01-14 18:49:55,282] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,798] WARN ServletContext@o.e.j.s.ServletContextHandler@45385f75{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
policy-db-migrator | Preparing upgrade release version: 0900
grafana | logger=migrator t=2024-01-14T18:49:50.448532677Z level=info msg="Executing migration" id="create temp user table v1-7"
policy-apex-pdp | auto.offset.reset = latest
mariadb |
policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
policy-api | =========|_|==============|___/=/_/_/_/
prometheus | ts=2024-01-14T18:49:49.012Z caller=head.go:682 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=6.46µs
simulator | 2024-01-14 18:49:47,341 INFO service manager starting
kafka | [2024-01-14 18:49:55,282] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,806] INFO Started o.e.j.s.ServletContextHandler@45385f75{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
policy-db-migrator | Preparing upgrade release version: 1000
grafana | logger=migrator t=2024-01-14T18:49:50.449133288Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=600.311µs
policy-apex-pdp | bootstrap.servers = [kafka:9092]
mariadb | '/usr/bin/mysql_secure_installation'
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-api | :: Spring Boot :: (v3.1.4)
prometheus | ts=2024-01-14T18:49:49.012Z caller=head.go:690 level=info component=tsdb msg="Replaying WAL, this may take a while"
simulator | 2024-01-14 18:49:47,341 INFO service manager starting Topic Sweeper
kafka | [2024-01-14 18:49:55,282] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,816] INFO Started ServerConnector@304bb45b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
policy-db-migrator | Preparing upgrade release version: 1100
grafana | logger=migrator t=2024-01-14T18:49:50.456274146Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
policy-apex-pdp | check.crcs = true
mariadb |
policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
policy-api |
prometheus | ts=2024-01-14T18:49:49.012Z caller=head.go:761 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
simulator | 2024-01-14 18:49:47,342 INFO service manager started
kafka | [2024-01-14 18:49:55,282] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,816] INFO Started @702ms (org.eclipse.jetty.server.Server)
policy-db-migrator | Preparing upgrade release version: 1200
grafana | logger=migrator t=2024-01-14T18:49:50.457683445Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.409089ms
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
mariadb | which will also give you the option of removing the test
policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / /
policy-api | [2024-01-14T18:50:02.094+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 25 (/app/api.jar started by policy in /opt/app/policy/api/bin)
prometheus | ts=2024-01-14T18:49:49.012Z caller=head.go:798 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=38.662µs wal_replay_duration=420.464µs wbl_replay_duration=180ns total_replay_duration=496.027µs
simulator | 2024-01-14 18:49:47,342 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
kafka | [2024-01-14 18:49:55,282] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,816] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
grafana | logger=migrator t=2024-01-14T18:49:50.461575411Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2024-01-14T18:49:50.462890427Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.314866ms
policy-pap | =========|_|==============|___/=/_/_/_/
policy-api | [2024-01-14T18:50:02.097+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
prometheus | ts=2024-01-14T18:49:49.015Z caller=main.go:1045 level=info fs_type=EXT4_SUPER_MAGIC
simulator | 2024-01-14 18:49:47,568 INFO org.onap.policy.models.simulators starting DMaaP simulator
kafka | [2024-01-14 18:49:55,282] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,819] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
policy-db-migrator | Preparing upgrade release version: 1300
policy-apex-pdp | client.id = consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-1
mariadb | databases and anonymous user created by default. This is
grafana | logger=migrator t=2024-01-14T18:49:50.466475071Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
policy-pap | :: Spring Boot :: (v3.1.4)
policy-api | [2024-01-14T18:50:03.844+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
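The policy-apex-pdp block above is Kafka's ConsumerConfig dump, one `key = value` pair per line (allow.auto.create.topics, auto.commit.interval.ms, bootstrap.servers, and so on). When a CSIT check wants to assert on the effective consumer settings, one option is to parse that dump from the captured log; a minimal sketch, assuming plain text input and string-valued results (the helper name `parse_config_dump` is hypothetical, not part of the test suite):

```python
def parse_config_dump(text: str) -> dict:
    """Parse 'key = value' lines as printed by Kafka's ConsumerConfig dump.

    Lines without ' = ' (headers, prefixes, noise) are ignored; values stay
    as strings, so list-like values keep their bracket syntax, e.g. '[kafka:9092]'.
    """
    cfg = {}
    for line in text.splitlines():
        key, sep, value = line.partition(" = ")
        if sep:  # only lines that actually contain the separator
            cfg[key.strip()] = value.strip()
    return cfg
```

For example, feeding it the three pairs visible above would yield `{"allow.auto.create.topics": "true", "auto.commit.interval.ms": "5000", "bootstrap.servers": "[kafka:9092]"}`.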
prometheus | ts=2024-01-14T18:49:49.015Z caller=main.go:1048 level=info msg="TSDB started"
simulator | 2024-01-14 18:49:47,677 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=DMaaP simulator, host=0.0.0.0, port=3904, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@fd8294b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@5974109{/,null,STOPPED}, connector=DMaaP simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:3904}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
kafka | [2024-01-14 18:49:55,282] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,820] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
policy-db-migrator | Done
policy-apex-pdp | client.rack =
mariadb | strongly recommended for production servers.
grafana | logger=migrator t=2024-01-14T18:49:50.467317061Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=841.93µs
policy-pap |
policy-api | [2024-01-14T18:50:03.933+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 79 ms. Found 6 JPA repository interfaces.
prometheus | ts=2024-01-14T18:49:49.015Z caller=main.go:1230 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
simulator | 2024-01-14 18:49:47,688 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=DMaaP simulator, host=0.0.0.0, port=3904, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@fd8294b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@5974109{/,null,STOPPED}, connector=DMaaP simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:3904}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
kafka | [2024-01-14 18:49:55,282] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,821] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
policy-db-migrator | name version
policy-apex-pdp | connections.max.idle.ms = 540000
mariadb |
grafana | logger=migrator t=2024-01-14T18:49:50.472502011Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
policy-pap | [2024-01-14T18:50:17.132+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 33 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-api | [2024-01-14T18:50:04.358+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
prometheus | ts=2024-01-14T18:49:49.016Z caller=main.go:1267 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=954.702µs db_storage=1.01µs remote_storage=2.39µs web_handler=610ns query_engine=800ns scrape=194.916µs scrape_sd=119.034µs notify=39.402µs notify_sd=27.101µs rules=1.57µs tracing=5.1µs
simulator | 2024-01-14 18:49:47,690 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=DMaaP simulator, host=0.0.0.0, port=3904, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@fd8294b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@5974109{/,null,STOPPED}, connector=DMaaP simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:3904}, jettyThread=Thread[DMaaP simulator-3904,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
kafka | [2024-01-14 18:49:55,282] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,822] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
policy-db-migrator | policyadmin 0
policy-apex-pdp | default.api.timeout.ms = 60000
mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
grafana | logger=migrator t=2024-01-14T18:49:50.473568168Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.064537ms
policy-pap | [2024-01-14T18:50:17.134+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-api | [2024-01-14T18:50:04.358+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
prometheus | ts=2024-01-14T18:49:49.016Z caller=main.go:1009 level=info msg="Server is ready to receive web requests."
simulator | 2024-01-14 18:49:47,698 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
kafka | [2024-01-14 18:49:55,282] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,850] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-apex-pdp | enable.auto.commit = true
mariadb |
grafana | logger=migrator t=2024-01-14T18:49:50.477205435Z level=info msg="Executing migration" id="Update temp_user table charset"
policy-pap | [2024-01-14T18:50:18.997+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-api | [2024-01-14T18:50:05.010+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
prometheus | ts=2024-01-14T18:49:49.016Z caller=manager.go:1012 level=info component="rule manager" msg="Starting rule manager..."
simulator | 2024-01-14 18:49:47,759 INFO Session workerName=node0
kafka | [2024-01-14 18:49:55,282] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,850] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
policy-db-migrator | upgrade: 0 -> 1300
policy-apex-pdp | exclude.internal.topics = true
mariadb | Please report any problems at https://mariadb.org/jira
grafana | logger=migrator t=2024-01-14T18:49:50.477395792Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=189.786µs
policy-pap | [2024-01-14T18:50:19.105+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 96 ms. Found 7 JPA repository interfaces.
policy-api | [2024-01-14T18:50:05.027+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
simulator | 2024-01-14 18:49:48,312 INFO Using GSON for REST calls
kafka | [2024-01-14 18:49:55,282] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,852] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
policy-db-migrator |
policy-apex-pdp | fetch.max.bytes = 52428800
mariadb |
grafana | logger=migrator t=2024-01-14T18:49:50.481003887Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
policy-pap | [2024-01-14T18:50:19.598+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-api | [2024-01-14T18:50:05.029+00:00|INFO|StandardService|main] Starting service [Tomcat]
simulator | 2024-01-14 18:49:48,381 INFO Started o.e.j.s.ServletContextHandler@5974109{/,null,AVAILABLE}
kafka | [2024-01-14 18:49:55,285] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@23a5fd2 (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,852] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-apex-pdp | fetch.max.wait.ms = 500
mariadb | The latest information about MariaDB is available at https://mariadb.org/.
grafana | logger=migrator t=2024-01-14T18:49:50.481813185Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=810.178µs
policy-pap | [2024-01-14T18:50:19.598+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-api | [2024-01-14T18:50:05.029+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16]
simulator | 2024-01-14 18:49:48,389 INFO Started DMaaP simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:3904}
kafka | [2024-01-14 18:49:55,288] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
zookeeper_1 | [2024-01-14 18:49:51,857] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
policy-db-migrator | --------------
policy-apex-pdp | fetch.min.bytes = 1
mariadb |
grafana | logger=migrator t=2024-01-14T18:49:50.489176082Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
policy-pap | [2024-01-14T18:50:20.243+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-api | [2024-01-14T18:50:05.117+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
simulator | 2024-01-14 18:49:48,395 INFO Started Server@fd8294b{STARTING}[11.0.18,sto=0] @1629ms
kafka | [2024-01-14 18:49:55,292] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
zookeeper_1 | [2024-01-14 18:49:51,857] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
policy-apex-pdp | group.id = 4f5099d3-3717-42bb-ba40-fb39c13c7c61
mariadb | Consider joining MariaDB's strong and vibrant community:
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
grafana | logger=migrator t=2024-01-14T18:49:50.489917637Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=743.776µs
policy-pap | [2024-01-14T18:50:20.255+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-api | [2024-01-14T18:50:05.117+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2954 ms
simulator | 2024-01-14 18:49:48,395 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=DMaaP simulator, host=0.0.0.0, port=3904, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@fd8294b{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@5974109{/,null,AVAILABLE}, connector=DMaaP simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:3904}, jettyThread=Thread[DMaaP simulator-3904,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6821ea29==org.glassfish.jersey.servlet.ServletContainer@ad0b2472{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4295 ms.
kafka | [2024-01-14 18:49:55,299] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | [2024-01-14 18:49:51,860] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
policy-apex-pdp | group.instance.id = null
mariadb | https://mariadb.org/get-involved/
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:50.493758721Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
policy-pap | [2024-01-14T18:50:20.259+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-api | [2024-01-14T18:50:05.565+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
simulator | 2024-01-14 18:49:48,400 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
kafka | [2024-01-14 18:49:55,314] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | [2024-01-14 18:49:51,860] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
policy-apex-pdp | heartbeat.interval.ms = 3000
mariadb |
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:50.494919742Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.16123ms
policy-pap | [2024-01-14T18:50:20.260+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16]
policy-api | [2024-01-14T18:50:05.645+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
simulator | 2024-01-14 18:49:48,401 INFO org.onap.policy.models.simulators starting A&AI simulator
kafka | [2024-01-14 18:49:55,315] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | [2024-01-14 18:49:51,861] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer)
policy-apex-pdp | interceptor.classes = []
mariadb | 2024-01-14 18:49:45+00:00 [Note] [Entrypoint]: Database files initialized
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:50.498750575Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
policy-pap | [2024-01-14T18:50:20.363+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-api | [2024-01-14T18:50:05.649+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
simulator | 2024-01-14 18:49:48,405 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4fc5e095{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@435871cb{/,null,STOPPED}, connector=A&AI simulator@6b9ce1bf{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
kafka | [2024-01-14 18:49:55,324] INFO Socket connection established, initiating session, client: /172.17.0.8:51658, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | [2024-01-14 18:49:51,868] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
policy-apex-pdp | internal.leave.group.on.close = true
mariadb | 2024-01-14 18:49:45+00:00 [Note] [Entrypoint]: Starting temporary server
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-01-14T18:49:50.49948332Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=733.375µs
policy-pap | [2024-01-14T18:50:20.363+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3155 ms
policy-api | [2024-01-14T18:50:05.694+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
simulator | 2024-01-14 18:49:48,405 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4fc5e095{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@435871cb{/,null,STOPPED}, connector=A&AI simulator@6b9ce1bf{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
kafka | [2024-01-14 18:49:55,362] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000042c7250000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | [2024-01-14 18:49:51,869] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
mariadb | 2024-01-14 18:49:45+00:00 [Note] [Entrypoint]: Waiting for server startup
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:50.506712412Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
policy-pap | [2024-01-14T18:50:20.776+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-api | [2024-01-14T18:50:06.047+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
simulator | 2024-01-14 18:49:48,406 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4fc5e095{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@435871cb{/,null,STOPPED}, connector=A&AI simulator@6b9ce1bf{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
kafka | [2024-01-14 18:49:55,496] INFO Session: 0x1000042c7250000 closed (org.apache.zookeeper.ZooKeeper)
zookeeper_1 | [2024-01-14 18:49:51,881] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
policy-apex-pdp | isolation.level = read_uncommitted
mariadb | 2024-01-14 18:49:45 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ...
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
grafana | logger=migrator t=2024-01-14T18:49:50.512487113Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.772461ms
policy-pap | [2024-01-14T18:50:20.857+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
policy-api | [2024-01-14T18:50:06.069+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
simulator | 2024-01-14 18:49:48,407 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
kafka | [2024-01-14 18:49:55,496] INFO EventThread shut down for session: 0x1000042c7250000 (org.apache.zookeeper.ClientCnxn)
zookeeper_1 | [2024-01-14 18:49:51,882] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
mariadb | 2024-01-14 18:49:45 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:50.515825829Z level=info msg="Executing migration" id="create temp_user v2"
policy-pap | [2024-01-14T18:50:20.860+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
policy-api | [2024-01-14T18:50:06.164+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717
simulator | 2024-01-14 18:49:48,420 INFO Session workerName=node0
kafka | Using log4j config /etc/kafka/log4j.properties
zookeeper_1 | [2024-01-14 18:49:55,340] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
policy-apex-pdp | max.partition.fetch.bytes = 1048576
mariadb | 2024-01-14 18:49:45 0 [Note] InnoDB: Number of transaction pools: 1
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:50.516437301Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=611.121µs
policy-pap | [2024-01-14T18:50:20.909+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-api | [2024-01-14T18:50:06.167+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
simulator | 2024-01-14 18:49:48,502 INFO Using GSON for REST calls
kafka | ===> Launching ...
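The migrator's `0110-jpapdpstatistics_enginestats.sql` DDL above can be exercised standalone to inspect the resulting table shape. A minimal sketch, not part of any ONAP component: the real migrator targets MariaDB, but SQLite's permissive type handling accepts the same statement, so an in-memory database is enough to list the columns.

```python
import sqlite3

# DDL copied from the policy-db-migrator output above (0110-jpapdpstatistics_enginestats.sql).
DDL = """
CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (
    AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL,
    ENGINEID VARCHAR(255) DEFAULT NULL,
    ENGINETIMESTAMP BIGINT DEFAULT NULL,
    ENGINEWORKERSTATE INT DEFAULT NULL,
    EVENTCOUNT BIGINT DEFAULT NULL,
    LASTENTERTIME BIGINT DEFAULT NULL,
    LASTEXECUTIONTIME BIGINT DEFAULT NULL,
    LASTSTART BIGINT DEFAULT NULL,
    UPTIME BIGINT DEFAULT NULL,
    timeStamp datetime DEFAULT NULL,
    name VARCHAR(120) DEFAULT NULL,
    version VARCHAR(20) DEFAULT NULL
)
"""

def table_columns(ddl: str, table: str) -> list[str]:
    """Apply the DDL to an in-memory SQLite database and return the column names."""
    con = sqlite3.connect(":memory:")
    con.execute(ddl)
    cols = [row[1] for row in con.execute(f"PRAGMA table_info({table})")]
    con.close()
    return cols

cols = table_columns(DDL, "jpapdpstatistics_enginestats")
```

SQLite is only a stand-in here; column types such as `DOUBLE` and `datetime` are stored as declared-type strings rather than enforced, so this checks structure, not MariaDB semantics.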
policy-apex-pdp | max.poll.interval.ms = 300000
mariadb | 2024-01-14 18:49:45 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:50.519531208Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
policy-pap | [2024-01-14T18:50:21.291+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-api | [2024-01-14T18:50:06.196+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
kafka | ===> Launching kafka ...
simulator | 2024-01-14 18:49:48,517 INFO Started o.e.j.s.ServletContextHandler@435871cb{/,null,AVAILABLE}
policy-apex-pdp | max.poll.records = 500
mariadb | 2024-01-14 18:49:45 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
grafana | logger=migrator t=2024-01-14T18:49:50.52015719Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=625.172µs
policy-pap | [2024-01-14T18:50:21.311+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-api | [2024-01-14T18:50:06.197+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
kafka | [2024-01-14 18:49:56,149] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
simulator | 2024-01-14 18:49:48,518 INFO Started A&AI simulator@6b9ce1bf{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
policy-apex-pdp | metadata.max.age.ms = 300000
mariadb | 2024-01-14 18:49:45 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:50.529693642Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
policy-pap | [2024-01-14T18:50:21.433+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@4ee6291f
policy-api | [2024-01-14T18:50:08.054+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
kafka | [2024-01-14 18:49:56,453] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
simulator | 2024-01-14 18:49:48,518 INFO Started Server@4fc5e095{STARTING}[11.0.18,sto=0] @1752ms
policy-apex-pdp | metric.reporters = []
mariadb | 2024-01-14 18:49:45 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
grafana | logger=migrator t=2024-01-14T18:49:50.531140412Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.44186ms
policy-pap | [2024-01-14T18:50:21.435+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2024-01-14T18:50:08.057+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
kafka | [2024-01-14 18:49:56,516] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
simulator | 2024-01-14 18:49:48,518 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4fc5e095{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@435871cb{/,null,AVAILABLE}, connector=A&AI simulator@6b9ce1bf{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-75ed9710==org.glassfish.jersey.servlet.ServletContainer@cffad7c8{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4888 ms.
policy-apex-pdp | metrics.num.samples = 2
mariadb | 2024-01-14 18:49:45 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:50.535601468Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
policy-pap | [2024-01-14T18:50:21.465+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
policy-api | [2024-01-14T18:50:09.274+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
kafka | [2024-01-14 18:49:56,518] INFO starting (kafka.server.KafkaServer)
simulator | 2024-01-14 18:49:48,519 INFO org.onap.policy.models.simulators starting SDNC simulator
policy-apex-pdp | metrics.recording.level = INFO
mariadb | 2024-01-14 18:49:45 0 [Note] InnoDB: Completed initialization of buffer pool
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:50.536471008Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=870.211µs
policy-pap | [2024-01-14T18:50:21.466+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
policy-api | [2024-01-14T18:50:10.162+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
kafka | [2024-01-14 18:49:56,518] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
simulator | 2024-01-14 18:49:48,521 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4bef0fe3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@62ea3440{/,null,STOPPED}, connector=SDNC simulator@79da1ec0{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-apex-pdp | metrics.sample.window.ms = 30000
mariadb | 2024-01-14 18:49:45 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:50.539583606Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
policy-pap | [2024-01-14T18:50:23.388+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2024-01-14T18:50:11.359+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
kafka | [2024-01-14 18:49:56,533] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
simulator | 2024-01-14 18:49:48,521 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4bef0fe3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@62ea3440{/,null,STOPPED}, connector=SDNC simulator@79da1ec0{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
mariadb | 2024-01-14 18:49:46 0 [Note] InnoDB: 128 rollback segments are active.
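The apex-pdp Kafka consumer settings are printed piecemeal through this log (`heartbeat.interval.ms`, `max.poll.records`, `partition.assignment.strategy`, and so on). A minimal sketch of collecting them into one mapping for offline sanity checks; the values are copied from the log entries above, while the check function and its thresholds are illustrative and not part of apex-pdp.

```python
# Values taken from the policy-apex-pdp ConsumerConfig lines in this log.
APEX_PDP_CONSUMER = {
    "group.instance.id": None,
    "heartbeat.interval.ms": 3000,
    "isolation.level": "read_uncommitted",
    "max.partition.fetch.bytes": 1048576,
    "max.poll.interval.ms": 300000,
    "max.poll.records": 500,
    "metadata.max.age.ms": 300000,
    "receive.buffer.bytes": 65536,
    "reconnect.backoff.ms": 50,
    "reconnect.backoff.max.ms": 1000,
    "request.timeout.ms": 30000,
}

def consumer_config_warnings(cfg: dict) -> list[str]:
    """Flag setting combinations that commonly destabilize a consumer group."""
    warnings = []
    if cfg["reconnect.backoff.ms"] > cfg["reconnect.backoff.max.ms"]:
        warnings.append("reconnect backoff floor exceeds its ceiling")
    if cfg["request.timeout.ms"] >= cfg["max.poll.interval.ms"]:
        warnings.append("request.timeout.ms should be well below max.poll.interval.ms")
    return warnings
```

On the logged defaults both checks pass, which matches the healthy consumer startup seen later in the log.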
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
grafana | logger=migrator t=2024-01-14T18:49:50.540456947Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=872.171µs
policy-pap | [2024-01-14T18:50:23.391+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2024-01-14T18:50:11.544+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@607c7f58, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4bbb00a4, org.springframework.security.web.context.SecurityContextHolderFilter@6e11d059, org.springframework.security.web.header.HeaderWriterFilter@1d123972, org.springframework.security.web.authentication.logout.LogoutFilter@54e1e8a7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@206d4413, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@19bd1f98, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@69cf9acb, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@543d242e, org.springframework.security.web.access.ExceptionTranslationFilter@5b3063b7, org.springframework.security.web.access.intercept.AuthorizationFilter@407bfc49]
simulator | 2024-01-14 18:49:48,522 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4bef0fe3{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@62ea3440{/,null,STOPPED}, connector=SDNC simulator@79da1ec0{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
kafka | [2024-01-14 18:49:56,538] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-14 18:49:56,538] INFO Client environment:host.name=5fb227298fe3 (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-01-14 18:49:46 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:50.54401539Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
policy-pap | [2024-01-14T18:50:23.988+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
policy-api | [2024-01-14T18:50:12.403+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
simulator | 2024-01-14 18:49:48,523 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
kafka | [2024-01-14 18:49:56,538] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-14 18:49:56,538] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-01-14 18:49:46 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
grafana | logger=migrator t=2024-01-14T18:49:50.544512738Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=497.128µs
policy-pap | [2024-01-14T18:50:24.554+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
policy-api | [2024-01-14T18:50:12.490+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
simulator | 2024-01-14 18:49:48,537 INFO Session workerName=node0
kafka | [2024-01-14 18:49:56,538] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-14 18:49:56,538] INFO Client
environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share
/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/ka
fka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.j
ar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-01-14 18:49:46 0 [Note] InnoDB: log sequence number 46590; transaction id 14
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:50.55004839Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
policy-pap | [2024-01-14T18:50:24.658+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
policy-api | [2024-01-14T18:50:12.516+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
simulator | 2024-01-14 18:49:48,582 INFO Using GSON for REST calls
kafka | [2024-01-14 18:49:56,538] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-14 18:49:56,538] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-01-14 18:49:46 0 [Note] Plugin 'FEEDBACK' is disabled.
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:50.550698503Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=649.713µs
policy-pap | [2024-01-14T18:50:24.922+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-api | [2024-01-14T18:50:12.537+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.198 seconds (process running for 11.833)
simulator | 2024-01-14 18:49:48,590 INFO Started o.e.j.s.ServletContextHandler@62ea3440{/,null,AVAILABLE}
kafka | [2024-01-14 18:49:56,538] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-14 18:49:56,538] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-01-14 18:49:46 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:50.553967827Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
policy-pap | allow.auto.create.topics = true
policy-api | [2024-01-14T18:50:31.812+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
simulator | 2024-01-14 18:49:48,591 INFO Started SDNC simulator@79da1ec0{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
kafka | [2024-01-14 18:49:56,538] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-14 18:49:56,538] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-01-14 18:49:46 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
grafana | logger=migrator t=2024-01-14T18:49:50.554690072Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=721.565µs
policy-pap | auto.commit.interval.ms = 5000
policy-api | [2024-01-14T18:50:31.812+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
simulator | 2024-01-14 18:49:48,591 INFO Started Server@4bef0fe3{STARTING}[11.0.18,sto=0] @1825ms
kafka | [2024-01-14 18:49:56,538] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-14 18:49:56,538] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-01-14 18:49:46 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:50.558182944Z level=info msg="Executing migration" id="create star table"
policy-pap | auto.include.jmx.reporter = true
policy-api | [2024-01-14T18:50:31.814+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
simulator | 2024-01-14 18:49:48,591 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@4bef0fe3{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@62ea3440{/,null,AVAILABLE}, connector=SDNC simulator@79da1ec0{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-192d74fb==org.glassfish.jersey.servlet.ServletContainer@dd44a281{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4931 ms.
kafka | [2024-01-14 18:49:56,538] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-14 18:49:56,538] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper)
mariadb | 2024-01-14 18:49:46 0 [Note] mariadbd: ready for connections.
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
grafana | logger=migrator t=2024-01-14T18:49:50.559294742Z level=info msg="Migration successfully executed" id="create star table" duration=1.111569ms
policy-pap | auto.offset.reset = latest
policy-api | [2024-01-14T18:50:32.107+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers:
simulator | 2024-01-14 18:49:48,593 INFO org.onap.policy.models.simulators starting SO simulator
policy-apex-pdp | receive.buffer.bytes = 65536
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
policy-db-migrator | --------------
kafka | [2024-01-14 18:49:56,538] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
grafana | logger=migrator t=2024-01-14T18:49:50.566762572Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
policy-pap | bootstrap.servers = [kafka:9092]
policy-api | []
simulator | 2024-01-14 18:49:48,598 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@5bfa8cc5{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@666b83a4{/,null,STOPPED}, connector=SO simulator@556d0826{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
policy-apex-pdp | reconnect.backoff.max.ms = 1000
mariadb | 2024-01-14 18:49:46+00:00 [Note] [Entrypoint]: Temporary server started.
policy-db-migrator |
kafka | [2024-01-14 18:49:56,538] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
grafana | logger=migrator t=2024-01-14T18:49:50.567725736Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=962.164µs
policy-pap | check.crcs = true
simulator | 2024-01-14 18:49:48,600 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@5bfa8cc5{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@666b83a4{/,null,STOPPED}, connector=SO simulator@556d0826{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | reconnect.backoff.ms = 50
mariadb | 2024-01-14 18:49:48+00:00 [Note] [Entrypoint]: Creating user policy_user
policy-db-migrator |
kafka | [2024-01-14 18:49:56,540] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@68be8808 (org.apache.zookeeper.ZooKeeper)
grafana | logger=migrator t=2024-01-14T18:49:50.572749601Z level=info msg="Executing migration" id="create org table v1"
policy-pap | client.dns.lookup = use_all_dns_ips
simulator | 2024-01-14 18:49:48,601 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@5bfa8cc5{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@666b83a4{/,null,STOPPED}, connector=SO simulator@556d0826{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | request.timeout.ms = 30000
mariadb | 2024-01-14 18:49:48+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
kafka | [2024-01-14 18:49:56,544] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
grafana | logger=migrator
t=2024-01-14T18:49:50.574001364Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.251703ms policy-pap | client.id = consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-1 simulator | 2024-01-14 18:49:48,602 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 policy-apex-pdp | retry.backoff.ms = 100 mariadb | policy-db-migrator | -------------- kafka | [2024-01-14 18:49:56,549] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) grafana | logger=migrator t=2024-01-14T18:49:50.582143218Z level=info msg="Executing migration" id="create index UQE_org_name - v1" policy-pap | client.rack = simulator | 2024-01-14 18:49:48,613 INFO Session workerName=node0 policy-apex-pdp | sasl.client.callback.handler.class = null mariadb | 2024-01-14 18:49:48+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) kafka | [2024-01-14 18:49:56,551] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) grafana | logger=migrator t=2024-01-14T18:49:50.583583608Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.438801ms policy-pap | connections.max.idle.ms = 540000 simulator | 2024-01-14 18:49:48,660 INFO Using GSON for REST calls policy-apex-pdp | sasl.jaas.config = null mariadb | policy-db-migrator | -------------- kafka | [2024-01-14 18:49:56,555] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. 
(org.apache.zookeeper.ClientCnxn) policy-pap | default.api.timeout.ms = 60000 grafana | logger=migrator t=2024-01-14T18:49:50.587263866Z level=info msg="Executing migration" id="create org_user table v1" policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit simulator | 2024-01-14 18:49:48,670 INFO Started o.e.j.s.ServletContextHandler@666b83a4{/,null,AVAILABLE} mariadb | 2024-01-14 18:49:48+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh policy-db-migrator | kafka | [2024-01-14 18:49:56,562] INFO Socket connection established, initiating session, client: /172.17.0.8:36126, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) policy-pap | enable.auto.commit = true grafana | logger=migrator t=2024-01-14T18:49:50.588006962Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=742.536µs policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 simulator | 2024-01-14 18:49:48,671 INFO Started SO simulator@556d0826{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} mariadb | #!/bin/bash -xv policy-db-migrator | kafka | [2024-01-14 18:49:56,602] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000042c7250001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) policy-pap | exclude.internal.topics = true grafana | logger=migrator t=2024-01-14T18:49:50.592918043Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" policy-apex-pdp | sasl.kerberos.service.name = null simulator | 2024-01-14 18:49:48,671 INFO Started Server@5bfa8cc5{STARTING}[11.0.18,sto=0] @1905ms mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql kafka | [2024-01-14 18:49:56,608] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) policy-pap | fetch.max.bytes = 52428800 grafana | logger=migrator t=2024-01-14T18:49:50.593767882Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=849.789µs policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 simulator | 2024-01-14 18:49:48,672 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@5bfa8cc5{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@666b83a4{/,null,AVAILABLE}, connector=SO simulator@556d0826{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-55b5f5d2==org.glassfish.jersey.servlet.ServletContainer@f89816de{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4929 ms. mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 
policy-db-migrator | --------------
kafka | [2024-01-14 18:49:58,444] INFO Cluster ID = 0Gs_niWkQtyT_H8dS3neSw (kafka.server.KafkaServer)
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-01-14T18:49:50.597554304Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
simulator | 2024-01-14 18:49:48,672 INFO org.onap.policy.models.simulators starting VFC simulator
mariadb | #
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | [2024-01-14 18:49:58,448] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-01-14T18:49:50.598402614Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=848.129µs
policy-apex-pdp | sasl.login.callback.handler.class = null
simulator | 2024-01-14 18:49:48,674 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2b95e48b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4a3329b9{/,null,STOPPED}, connector=VFC simulator@efde75f{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
policy-db-migrator | --------------
kafka | [2024-01-14 18:49:58,494] INFO KafkaConfig values:
policy-pap | group.id = 9f04366a-9b2f-4312-96e1-33019febbf8b
grafana | logger=migrator t=2024-01-14T18:49:50.601427649Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
policy-apex-pdp | sasl.login.class = null
simulator | 2024-01-14 18:49:48,674 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2b95e48b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4a3329b9{/,null,STOPPED}, connector=VFC simulator@efde75f{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
mariadb | # you may not use this file except in compliance with the License.
policy-db-migrator |
kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-01-14T18:49:50.602260828Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=833.279µs
policy-apex-pdp | sasl.login.connect.timeout.ms = null
simulator | 2024-01-14 18:49:48,675 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2b95e48b{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4a3329b9{/,null,STOPPED}, connector=VFC simulator@efde75f{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
mariadb | # You may obtain a copy of the License at
policy-db-migrator |
kafka | alter.config.policy.class.name = null
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-01-14T18:49:50.605006523Z level=info msg="Executing migration" id="Update org table charset"
policy-apex-pdp | sasl.login.read.timeout.ms = null
simulator | 2024-01-14 18:49:48,676 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
mariadb | #
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
kafka | alter.log.dirs.replication.quota.window.num = 11
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-01-14T18:49:50.605113517Z level=info msg="Migration successfully executed" id="Update org table charset" duration=106.334µs
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
simulator | 2024-01-14 18:49:48,683 INFO Session workerName=node0
mariadb | # http://www.apache.org/licenses/LICENSE-2.0
policy-db-migrator | --------------
kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-01-14T18:49:50.610282197Z level=info msg="Executing migration" id="Update org_user table charset"
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
simulator | 2024-01-14 18:49:48,723 INFO Using GSON for REST calls
mariadb | #
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
kafka | authorizer.class.name =
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-01-14T18:49:50.610382221Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=94.683µs
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
simulator | 2024-01-14 18:49:48,732 INFO Started o.e.j.s.ServletContextHandler@4a3329b9{/,null,AVAILABLE}
mariadb | # Unless required by applicable law or agreed to in writing, software
policy-db-migrator | --------------
kafka | auto.create.topics.enable = true
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-01-14T18:49:50.614765653Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
simulator | 2024-01-14 18:49:48,733 INFO Started VFC simulator@efde75f{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
policy-db-migrator |
kafka | auto.include.jmx.reporter = true
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-01-14T18:49:50.615048593Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=282.83µs
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
simulator | 2024-01-14 18:49:48,733 INFO Started Server@2b95e48b{STARTING}[11.0.18,sto=0] @1967ms
simulator | 2024-01-14 18:49:48,737 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2b95e48b{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4a3329b9{/,null,AVAILABLE}, connector=VFC simulator@efde75f{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3b220bcb==org.glassfish.jersey.servlet.ServletContainer@2323602e{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4938 ms.
policy-db-migrator |
kafka | auto.leader.rebalance.enable = true
kafka | background.threads = 10
grafana | logger=migrator t=2024-01-14T18:49:50.618569566Z level=info msg="Executing migration" id="create dashboard table"
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
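The policy-pap consumer settings are scattered through the interleaved output above. A minimal sketch that just collects the values actually printed in this log into one bash associative array for readability (it does not create a Kafka client; `bootstrap.servers` is logged as a one-element list, flattened here to a string):

```shell
#!/bin/bash
# policy-pap Kafka consumer config, values copied from the log lines above.
declare -A pap_consumer=(
    [bootstrap.servers]="kafka:9092"
    [group.id]="9f04366a-9b2f-4312-96e1-33019febbf8b"
    [client.id]="consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-1"
    [auto.offset.reset]="latest"
    [enable.auto.commit]="true"
    [auto.commit.interval.ms]="5000"
    [isolation.level]="read_uncommitted"
    [key.deserializer]="org.apache.kafka.common.serialization.StringDeserializer"
)
# Print the settings in "key = value" form, mirroring the log format.
for key in "${!pap_consumer[@]}"; do
    echo "${key} = ${pap_consumer[$key]}"
done
```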
simulator | 2024-01-14 18:49:48,738 INFO org.onap.policy.models.simulators starting Sink appc-cl
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
kafka | broker.heartbeat.interval.ms = 2000
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-01-14T18:49:50.619294461Z level=info msg="Migration successfully executed" id="create dashboard table" duration=724.366µs
policy-apex-pdp | sasl.mechanism = GSSAPI
mariadb | # See the License for the specific language governing permissions and
simulator | 2024-01-14 18:49:48,761 INFO InlineDmaapTopicSink [userName=null, password=null, getTopicCommInfrastructure()=DMAAP, toString()=InlineBusTopicSink [partitionId=77f3f7c6-ebf8-422f-ba62-504176ae6317, alive=false, publisher=null]]: starting
policy-db-migrator | --------------
kafka | broker.id = 1
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-01-14T18:49:50.625747745Z level=info msg="Executing migration" id="add index dashboard.account_id"
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
mariadb | # limitations under the License.
simulator | 2024-01-14 18:49:49,045 INFO InlineDmaapTopicSink [userName=null, password=null, getTopicCommInfrastructure()=DMAAP, toString()=InlineBusTopicSink [partitionId=77f3f7c6-ebf8-422f-ba62-504176ae6317, alive=false, publisher=CambriaPublisherWrapper []]]: DMAAP SINK created
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
kafka | broker.id.generation.enable = true
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-01-14T18:49:50.627053301Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.305836ms
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
mariadb |
simulator | 2024-01-14 18:49:49,046 INFO org.onap.policy.models.simulators starting Sink appc-lcm-write
policy-db-migrator | --------------
kafka | broker.rack = null
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-01-14T18:49:50.634986877Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
simulator | 2024-01-14 18:49:49,046 INFO InlineDmaapTopicSink [userName=null, password=null, getTopicCommInfrastructure()=DMAAP, toString()=InlineBusTopicSink [partitionId=ad190585-d09c-48fa-bcaf-590e201c6ab8, alive=false, publisher=null]]: starting
policy-db-migrator |
kafka | broker.session.timeout.ms = 9000
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-01-14T18:49:50.636402846Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.415559ms
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
mariadb | do
simulator | 2024-01-14 18:49:49,047 INFO InlineDmaapTopicSink [userName=null, password=null, getTopicCommInfrastructure()=DMAAP, toString()=InlineBusTopicSink [partitionId=ad190585-d09c-48fa-bcaf-590e201c6ab8, alive=false, publisher=CambriaPublisherWrapper []]]: DMAAP SINK created
policy-db-migrator |
kafka | client.quota.callback.class = null
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-01-14T18:49:50.640437117Z level=info msg="Executing migration" id="create dashboard_tag table"
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
simulator | 2024-01-14 18:49:49,047 INFO org.onap.policy.models.simulators starting Source appc-cl
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
kafka | compression.type = producer
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-01-14T18:49:50.641245315Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=807.548µs
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
simulator | 2024-01-14 18:49:49,056 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-cl, effectiveTopic=appc-cl, #recentEvents=0, locked=false, #topicListeners=0]]]]: INITTED
policy-db-migrator | --------------
kafka | connection.failed.authentication.delay.ms = 100
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-01-14T18:49:50.644948844Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
mariadb | done
simulator | 2024-01-14 18:49:49,072 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-cl, effectiveTopic=appc-cl, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | connections.max.idle.ms = 600000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-01-14T18:49:50.646516338Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.572425ms
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
simulator | 2024-01-14 18:49:49,072 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-cl, effectiveTopic=appc-cl, #recentEvents=0, locked=false, #topicListeners=0]]]]: INITTED
kafka | connections.max.reauth.ms = 0
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-01-14T18:49:50.652500137Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | --------------
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
simulator | 2024-01-14 18:49:49,073 INFO org.onap.policy.models.simulators starting Source appc-lcm-read
kafka | control.plane.listener.name = null
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-01-14T18:49:50.653646206Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.1457ms
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator |
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
simulator | 2024-01-14 18:49:49,073 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=024f82f7-ffad-4d64-9c13-138cc02c72df, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-lcm-read, effectiveTopic=appc-lcm-read, #recentEvents=0, locked=false, #topicListeners=0]]]]: INITTED
kafka | controlled.shutdown.enable = true
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-01-14T18:49:50.65661935Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
policy-apex-pdp | security.protocol = PLAINTEXT
policy-db-migrator |
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
simulator | 2024-01-14 18:49:49,074 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=024f82f7-ffad-4d64-9c13-138cc02c72df, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-lcm-read, effectiveTopic=appc-lcm-read, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
kafka | controlled.shutdown.max.retries = 3
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-01-14T18:49:50.663202869Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.582739ms
policy-apex-pdp | security.providers = null
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
simulator | 2024-01-14 18:49:49,074 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=024f82f7-ffad-4d64-9c13-138cc02c72df, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-lcm-read, effectiveTopic=appc-lcm-read, #recentEvents=0, locked=false, #topicListeners=0]]]]: INITTED
kafka | controlled.shutdown.retry.backoff.ms = 5000
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-14T18:49:50.666581537Z level=info msg="Executing migration" id="create dashboard v2"
policy-apex-pdp | send.buffer.bytes = 131072
policy-db-migrator | --------------
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
simulator | 2024-01-14 18:49:49,074 INFO org.onap.policy.models.simulators starting APPC Legacy simulator
kafka | controller.listener.names = null
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-01-14T18:49:50.667327993Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=746.376µs
policy-apex-pdp | session.timeout.ms = 45000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
simulator | 2024-01-14 18:49:49,077 INFO UEB GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator
kafka | controller.quorum.append.linger.ms = 25
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-01-14T18:49:50.672613897Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
policy-db-migrator | --------------
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
simulator | 2024-01-14 18:49:49,077 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=true, locked=false, uebThread=Thread[DMAAP-source-appc-cl,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-cl, effectiveTopic=appc-cl, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.simulators.AppcLegacyTopicServer@1c4ee95c
kafka | controller.quorum.election.backoff.max.ms = 1000
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-01-14T18:49:50.673808118Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.192551ms
policy-db-migrator |
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
simulator | 2024-01-14 18:49:49,077 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=true, locked=false, uebThread=Thread[DMAAP-source-appc-cl,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-cl, effectiveTopic=appc-cl, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
kafka | controller.quorum.election.timeout.ms = 1000
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-01-14T18:49:50.678000374Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
policy-db-migrator |
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
policy-apex-pdp | ssl.cipher.suites = null
simulator | 2024-01-14 18:49:49,078 INFO org.onap.policy.models.simulators starting appc-lcm-simulator
kafka | controller.quorum.fetch.timeout.ms = 2000
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-01-14T18:49:50.679441424Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.44453ms
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
simulator | 2024-01-14 18:49:49,076 INFO UEB GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator
kafka | controller.quorum.request.timeout.ms = 2000
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-01-14T18:49:50.68708613Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
policy-db-migrator | --------------
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
simulator | 2024-01-14 18:49:49,079 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=024f82f7-ffad-4d64-9c13-138cc02c72df, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=true, locked=false, uebThread=Thread[DMAAP-source-appc-lcm-read,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-lcm-read, effectiveTopic=appc-lcm-read, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.simulators.AppcLcmTopicServer@49bd54f7
kafka | controller.quorum.retry.backoff.ms = 20
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-01-14T18:49:50.687513885Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=427.195µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
policy-apex-pdp | ssl.engine.factory.class = null
simulator | 2024-01-14 18:49:49,079 INFO SingleThreadedDmaapTopicSource [userName=null, password=-, getTopicCommInfrastructure()=DMAAP, toString()=SingleThreadedBusTopicSource [consumerGroup=024f82f7-ffad-4d64-9c13-138cc02c72df, consumerInstance=simulator, fetchTimeout=-1, fetchLimit=-1, consumer=CambriaConsumerWrapper [fetchTimeout=-1], alive=true, locked=false, uebThread=Thread[DMAAP-source-appc-lcm-read,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=some-key, apiSecret=some-secret, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[simulator], topic=appc-lcm-read, effectiveTopic=appc-lcm-read, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
kafka | controller.quorum.voters = []
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-01-14T18:49:50.690853832Z level=info msg="Executing migration" id="drop table dashboard_v1"
policy-db-migrator | --------------
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
policy-apex-pdp | ssl.key.password = null
simulator | 2024-01-14 18:49:49,079 INFO org.onap.policy.models.simulators started
kafka | controller.quota.window.num = 11
policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-01-14T18:49:50.692415626Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.561625ms policy-db-migrator | mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp policy-apex-pdp | ssl.keymanager.algorithm = SunX509 simulator | 2024-01-14 18:49:49,097 WARN GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator will send credentials over a clear channel. kafka | controller.quota.window.size.seconds = 1 policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-01-14T18:49:50.698060072Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" policy-db-migrator | mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' policy-apex-pdp | ssl.keystore.certificate.chain = null simulator | 2024-01-14 18:49:49,097 WARN GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator will send credentials over a clear channel. kafka | controller.socket.timeout.ms = 30000 grafana | logger=migrator t=2024-01-14T18:49:50.69826608Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=207.848µs policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql policy-pap | sasl.login.read.timeout.ms = null mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' policy-apex-pdp | ssl.keystore.key = null simulator | 2024-01-14 18:49:49,097 INFO GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator (as some-key) ... 
kafka | create.topic.policy.class.name = null grafana | logger=migrator t=2024-01-14T18:49:50.702343832Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" policy-db-migrator | -------------- policy-pap | sasl.login.refresh.buffer.seconds = 300 mariadb | policy-apex-pdp | ssl.keystore.location = null simulator | 2024-01-14 18:49:49,097 INFO GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator (as some-key) ... kafka | default.replication.factor = 1 grafana | logger=migrator t=2024-01-14T18:49:50.704269109Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.919047ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | ssl.keystore.password = null simulator | 2024-01-14 18:49:49,205 INFO Topic appc-cl: added mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" kafka | delegation.token.expiry.check.interval.ms = 3600000 grafana | logger=migrator t=2024-01-14T18:49:50.709303134Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | ssl.keystore.type = JKS simulator | 2024-01-14 18:49:49,205 INFO Topic appc-lcm-read: added mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' kafka | delegation.token.expiry.time.ms = 86400000 grafana | logger=migrator t=2024-01-14T18:49:50.71121546Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.911796ms policy-db-migrator | policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | ssl.protocol = TLSv1.3 simulator | 2024-01-14 18:49:49,207 INFO Topic appc-cl: add consumer 
group: dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql kafka | delegation.token.master.key = null grafana | logger=migrator t=2024-01-14T18:49:50.71407036Z level=info msg="Executing migration" id="Add column gnetId in dashboard" policy-db-migrator | policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | ssl.provider = null simulator | 2024-01-14 18:49:49,207 INFO Topic appc-lcm-read: add consumer group: 024f82f7-ffad-4d64-9c13-138cc02c72df mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp kafka | delegation.token.max.lifetime.ms = 604800000 grafana | logger=migrator t=2024-01-14T18:49:50.715903884Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.830703ms policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql policy-pap | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | ssl.secure.random.implementation = null simulator | 2024-01-14 18:50:04,230 INFO --> HTTP/1.1 200 OK mariadb | kafka | delegation.token.secret.key = null grafana | logger=migrator t=2024-01-14T18:49:50.719806659Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" policy-db-migrator | -------------- policy-pap | sasl.mechanism = GSSAPI policy-apex-pdp | ssl.trustmanager.algorithm = PKIX simulator | 2024-01-14 18:50:04,230 INFO --> HTTP/1.1 200 OK mariadb | 2024-01-14 18:49:49+00:00 [Note] [Entrypoint]: Stopping temporary server kafka | delete.records.purgatory.purge.interval.requests = 1 grafana | logger=migrator t=2024-01-14T18:49:50.720831745Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.021196ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-pap | sasl.oauthbearer.clock.skew.seconds 
= 30 policy-apex-pdp | ssl.truststore.certificates = null simulator | 2024-01-14 18:50:04,235 INFO UEB GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator mariadb | 2024-01-14 18:49:49 0 [Note] mariadbd (initiated by: unknown): Normal shutdown kafka | delete.topic.enable = true grafana | logger=migrator t=2024-01-14T18:49:50.72613209Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.expected.audience = null policy-apex-pdp | ssl.truststore.location = null simulator | 2024-01-14 18:50:04,235 INFO UEB GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator mariadb | 2024-01-14 18:49:49 0 [Note] InnoDB: FTS optimize thread exiting. kafka | early.start.listeners = null grafana | logger=migrator t=2024-01-14T18:49:50.729357622Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.224813ms policy-db-migrator | policy-pap | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | ssl.truststore.password = null simulator | 2024-01-14 18:50:04,236 WARN GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator will send credentials over a clear channel. mariadb | 2024-01-14 18:49:49 0 [Note] InnoDB: Starting shutdown... kafka | fetch.max.bytes = 57671680 grafana | logger=migrator t=2024-01-14T18:49:50.733226337Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" policy-db-migrator | policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | ssl.truststore.type = JKS simulator | 2024-01-14 18:50:04,236 INFO GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator (as some-key) ... 
mariadb | 2024-01-14 18:49:49 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool kafka | fetch.purgatory.purge.interval.requests = 1000 grafana | logger=migrator t=2024-01-14T18:49:50.734137358Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=910.962µs policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer simulator | 2024-01-14 18:50:04,236 WARN GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator will send credentials over a clear channel. mariadb | 2024-01-14 18:49:49 0 [Note] InnoDB: Buffer pool(s) dump completed at 240114 18:49:49 kafka | group.consumer.assignors = [] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:50.738901984Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | simulator | 2024-01-14 18:50:04,236 INFO GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator (as some-key) ... 
mariadb | 2024-01-14 18:49:49 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" kafka | group.consumer.heartbeat.interval.ms = 5000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-14T18:49:50.739790065Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=888.031µs policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | [2024-01-14T18:50:28.316+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 simulator | 2024-01-14 18:50:04,237 INFO 172.17.0.3 - - [14/Jan/2024:18:49:49 +0000] "GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)" mariadb | 2024-01-14 18:49:49 0 [Note] InnoDB: Shutdown completed; log sequence number 340437; transaction id 298 kafka | group.consumer.max.heartbeat.interval.ms = 15000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:50.744123246Z level=info msg="Executing migration" id="Update dashboard table charset" policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | [2024-01-14T18:50:28.317+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a simulator | 2024-01-14 18:50:04,238 INFO 172.17.0.3 - - [14/Jan/2024:18:49:49 +0000] "GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)" mariadb | 2024-01-14 18:49:49 0 [Note] mariadbd: Shutdown complete kafka | group.consumer.max.session.timeout.ms = 60000 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:50.744265441Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=142.145µs policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | 
[2024-01-14T18:50:28.317+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705258228315 simulator | 2024-01-14 18:50:19,244 INFO --> HTTP/1.1 200 OK mariadb | kafka | group.consumer.max.size = 2147483647 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:50.747847345Z level=info msg="Executing migration" id="Update dashboard_tag table charset" policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | [2024-01-14T18:50:28.319+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-1, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Subscribed to topic(s): policy-pdp-pap simulator | 2024-01-14 18:50:19,244 INFO UEB GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator mariadb | 2024-01-14 18:49:49+00:00 [Note] [Entrypoint]: Temporary server stopped kafka | group.consumer.min.heartbeat.interval.ms = 5000 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql grafana | logger=migrator t=2024-01-14T18:49:50.747993841Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=146.395µs policy-pap | security.protocol = PLAINTEXT policy-apex-pdp | [2024-01-14T18:50:28.330+00:00|INFO|ServiceManager|main] service manager starting simulator | 2024-01-14 18:50:19,244 INFO 172.17.0.3 - - [14/Jan/2024:18:50:04 +0000] "GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)" mariadb | kafka | group.consumer.min.session.timeout.ms = 45000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:50.751337637Z level=info msg="Executing migration" id="Add column folder_id in dashboard" policy-pap | security.providers = null policy-apex-pdp | [2024-01-14T18:50:28.330+00:00|INFO|ServiceManager|main] service manager starting topics simulator | 2024-01-14 18:50:19,244 WARN GET 
http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator will send credentials over a clear channel. mariadb | 2024-01-14 18:49:49+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. kafka | group.consumer.session.timeout.ms = 45000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) grafana | logger=migrator t=2024-01-14T18:49:50.753369768Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.037161ms policy-pap | send.buffer.bytes = 131072 policy-apex-pdp | [2024-01-14T18:50:28.336+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4f5099d3-3717-42bb-ba40-fb39c13c7c61, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting simulator | 2024-01-14 18:50:19,244 INFO GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator (as some-key) ... 
mariadb | kafka | group.coordinator.new.enable = false policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:50.760784636Z level=info msg="Executing migration" id="Add column isFolder in dashboard" policy-pap | session.timeout.ms = 45000 policy-apex-pdp | [2024-01-14T18:50:28.358+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: simulator | 2024-01-14 18:50:19,245 INFO --> HTTP/1.1 200 OK mariadb | 2024-01-14 18:49:49 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... kafka | group.coordinator.threads = 1 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:50.762676992Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.893146ms policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | allow.auto.create.topics = true simulator | 2024-01-14 18:50:19,245 INFO UEB GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator mariadb | 2024-01-14 18:49:49 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 kafka | group.initial.rebalance.delay.ms = 3000 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:50.767683396Z level=info msg="Executing migration" id="Add column has_acl in dashboard" policy-pap | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | auto.commit.interval.ms = 5000 simulator | 2024-01-14 18:50:19,246 WARN GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator will send credentials over a clear channel. mariadb | 2024-01-14 18:49:49 0 [Note] InnoDB: Number of transaction pools: 1 kafka | group.max.session.timeout.ms = 1800000 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql policy-pap | ssl.cipher.suites = null simulator | 2024-01-14 18:50:19,246 INFO GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator (as some-key) ... 
mariadb | 2024-01-14 18:49:49 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions kafka | group.max.size = 2147483647 policy-db-migrator | -------------- policy-apex-pdp | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-01-14T18:49:50.769712427Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.02858ms policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] simulator | 2024-01-14 18:50:19,248 INFO 172.17.0.3 - - [14/Jan/2024:18:50:04 +0000] "GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)" mariadb | 2024-01-14 18:49:49 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) kafka | group.min.session.timeout.ms = 6000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | auto.offset.reset = latest grafana | logger=migrator t=2024-01-14T18:49:50.774924758Z level=info msg="Executing migration" id="Add column uid in dashboard" policy-pap | ssl.endpoint.identification.algorithm = https simulator | 2024-01-14 18:50:34,251 INFO 172.17.0.3 - - [14/Jan/2024:18:50:19 +0000] "GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)" mariadb | 2024-01-14 18:49:49 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) kafka | initial.broker.registration.timeout.ms = 60000 policy-db-migrator | -------------- policy-apex-pdp | bootstrap.servers = [kafka:9092] grafana | logger=migrator t=2024-01-14T18:49:50.776963899Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.046731ms policy-pap | ssl.engine.factory.class = null simulator | 2024-01-14 18:50:34,252 INFO --> 
HTTP/1.1 200 OK mariadb | 2024-01-14 18:49:49 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF kafka | inter.broker.listener.name = PLAINTEXT policy-db-migrator | policy-apex-pdp | check.crcs = true grafana | logger=migrator t=2024-01-14T18:49:50.782663447Z level=info msg="Executing migration" id="Update uid column values in dashboard" policy-pap | ssl.key.password = null simulator | 2024-01-14 18:50:34,253 INFO UEB GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator mariadb | 2024-01-14 18:49:49 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB kafka | inter.broker.protocol.version = 3.5-IV2 policy-db-migrator | policy-apex-pdp | client.dns.lookup = use_all_dns_ips grafana | logger=migrator t=2024-01-14T18:49:50.782962038Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=298.401µs policy-pap | ssl.keymanager.algorithm = SunX509 simulator | 2024-01-14 18:50:34,254 WARN GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator will send credentials over a clear channel. mariadb | 2024-01-14 18:49:49 0 [Note] InnoDB: Completed initialization of buffer pool kafka | kafka.metrics.polling.interval.secs = 10 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql policy-apex-pdp | client.id = consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2 grafana | logger=migrator t=2024-01-14T18:49:50.787729154Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" policy-pap | ssl.keystore.certificate.chain = null simulator | 2024-01-14 18:50:34,254 INFO GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator (as some-key) ... 
mariadb | 2024-01-14 18:49:50 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) kafka | kafka.metrics.reporters = [] policy-db-migrator | -------------- policy-apex-pdp | client.rack = grafana | logger=migrator t=2024-01-14T18:49:50.788722728Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=987.594µs policy-pap | ssl.keystore.key = null simulator | 2024-01-14 18:50:34,260 INFO 172.17.0.3 - - [14/Jan/2024:18:50:19 +0000] "GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)" mariadb | 2024-01-14 18:49:50 0 [Note] InnoDB: 128 rollback segments are active. kafka | leader.imbalance.check.interval.seconds = 300 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-apex-pdp | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-01-14T18:49:50.792575572Z level=info msg="Executing migration" id="Remove unique index org_id_slug" policy-pap | ssl.keystore.location = null simulator | 2024-01-14 18:50:34,260 INFO --> HTTP/1.1 200 OK mariadb | 2024-01-14 18:49:50 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... kafka | leader.imbalance.per.broker.percentage = 10 policy-db-migrator | -------------- policy-apex-pdp | default.api.timeout.ms = 60000 grafana | logger=migrator t=2024-01-14T18:49:50.793465853Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=890.341µs policy-pap | ssl.keystore.password = null simulator | 2024-01-14 18:50:34,261 INFO UEB GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator mariadb | 2024-01-14 18:49:50 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT policy-db-migrator | policy-apex-pdp | enable.auto.commit = true grafana | logger=migrator t=2024-01-14T18:49:50.799049967Z level=info msg="Executing migration" id="Update dashboard title length" policy-pap | ssl.keystore.type = JKS simulator | 2024-01-14 18:50:34,262 WARN GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator will send credentials over a clear channel. mariadb | 2024-01-14 18:49:50 0 [Note] InnoDB: log sequence number 340437; transaction id 299 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 policy-db-migrator | policy-apex-pdp | exclude.internal.topics = true grafana | logger=migrator t=2024-01-14T18:49:50.799164111Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=115.184µs policy-pap | ssl.protocol = TLSv1.3 simulator | 2024-01-14 18:50:34,262 INFO GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator (as some-key) ... mariadb | 2024-01-14 18:49:50 0 [Note] Plugin 'FEEDBACK' is disabled. 
kafka | log.cleaner.backoff.ms = 15000 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql policy-apex-pdp | fetch.max.bytes = 52428800 grafana | logger=migrator t=2024-01-14T18:49:50.803672958Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" policy-pap | ssl.provider = null simulator | 2024-01-14 18:50:49,261 INFO 172.17.0.3 - - [14/Jan/2024:18:50:34 +0000] "GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)" mariadb | 2024-01-14 18:49:50 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool kafka | log.cleaner.dedupe.buffer.size = 134217728 policy-db-migrator | -------------- policy-apex-pdp | fetch.max.wait.ms = 500 grafana | logger=migrator t=2024-01-14T18:49:50.804787176Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.113648ms policy-pap | ssl.secure.random.implementation = null simulator | 2024-01-14 18:50:49,261 INFO --> HTTP/1.1 200 OK mariadb | 2024-01-14 18:49:50 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. kafka | log.cleaner.delete.retention.ms = 86400000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | fetch.min.bytes = 1 grafana | logger=migrator t=2024-01-14T18:49:50.808870319Z level=info msg="Executing migration" id="create dashboard_provisioning" policy-pap | ssl.trustmanager.algorithm = PKIX simulator | 2024-01-14 18:50:49,261 INFO UEB GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator mariadb | 2024-01-14 18:49:50 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 
kafka | log.cleaner.enable = true policy-db-migrator | -------------- policy-apex-pdp | group.id = 4f5099d3-3717-42bb-ba40-fb39c13c7c61 grafana | logger=migrator t=2024-01-14T18:49:50.810244636Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.374098ms policy-pap | ssl.truststore.certificates = null simulator | 2024-01-14 18:50:49,262 WARN GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator will send credentials over a clear channel. mariadb | 2024-01-14 18:49:50 0 [Note] Server socket created on IP: '0.0.0.0'. kafka | log.cleaner.io.buffer.load.factor = 0.9 policy-db-migrator | policy-apex-pdp | group.instance.id = null grafana | logger=migrator t=2024-01-14T18:49:50.815398086Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" policy-pap | ssl.truststore.location = null simulator | 2024-01-14 18:50:49,263 INFO GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator (as some-key) ... mariadb | 2024-01-14 18:49:50 0 [Note] Server socket created on IP: '::'. kafka | log.cleaner.io.buffer.size = 524288 policy-db-migrator | policy-apex-pdp | heartbeat.interval.ms = 3000 grafana | logger=migrator t=2024-01-14T18:49:50.825911832Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=10.515596ms policy-pap | ssl.truststore.password = null simulator | 2024-01-14 18:50:49,267 INFO 172.17.0.3 - - [14/Jan/2024:18:50:34 +0000] "GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)" mariadb | 2024-01-14 18:49:50 0 [Note] mariadbd: ready for connections. 
kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-apex-pdp | interceptor.classes = [] grafana | logger=migrator t=2024-01-14T18:49:50.836220261Z level=info msg="Executing migration" id="create dashboard_provisioning v2" policy-pap | ssl.truststore.type = JKS simulator | 2024-01-14 18:50:49,267 INFO --> HTTP/1.1 200 OK mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 policy-db-migrator | -------------- policy-apex-pdp | internal.leave.group.on.close = true grafana | logger=migrator t=2024-01-14T18:49:50.837513716Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.293565ms policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer simulator | 2024-01-14 18:50:49,267 INFO UEB GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator mariadb | 2024-01-14 18:49:50 0 [Note] InnoDB: Buffer pool(s) load completed at 240114 18:49:50 kafka | log.cleaner.min.cleanable.ratio = 0.5 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false grafana | logger=migrator t=2024-01-14T18:49:50.841724962Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" policy-pap | simulator | 2024-01-14 18:50:49,268 WARN GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator will send credentials over a clear channel. 
mariadb | 2024-01-14 18:49:50 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
kafka | log.cleaner.min.compaction.lag.ms = 0
policy-db-migrator | --------------
policy-apex-pdp | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-01-14T18:49:50.843206494Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.481422ms
policy-pap | [2024-01-14T18:50:25.097+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
simulator | 2024-01-14 18:50:49,268 INFO GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator (as some-key) ...
mariadb | 2024-01-14 18:49:50 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication)
kafka | log.cleaner.threads = 1
policy-db-migrator | 
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-01-14T18:49:50.847381509Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
policy-pap | [2024-01-14T18:50:25.097+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
simulator | 2024-01-14 18:51:04,268 INFO 172.17.0.3 - - [14/Jan/2024:18:50:49 +0000] "GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
mariadb | 2024-01-14 18:49:52 76 [Warning] Aborted connection 76 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
kafka | log.cleanup.policy = [delete]
policy-db-migrator | 
policy-apex-pdp | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-01-14T18:49:50.849089598Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.712059ms
policy-pap | [2024-01-14T18:50:25.097+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705258225096
simulator | 2024-01-14 18:51:04,268 INFO --> HTTP/1.1 200 OK
mariadb | 2024-01-14 18:49:53 120 [Warning] Aborted connection 120 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
kafka | log.dir = /tmp/kafka-logs
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-apex-pdp | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-01-14T18:49:50.854168155Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
policy-pap | [2024-01-14T18:50:25.099+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-1, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Subscribed to topic(s): policy-pdp-pap
simulator | 2024-01-14 18:51:04,270 INFO UEB GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator
kafka | log.dirs = /var/lib/kafka/data
policy-db-migrator | --------------
policy-apex-pdp | max.poll.records = 500
grafana | logger=migrator t=2024-01-14T18:49:50.854576789Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=408.654µs
policy-pap | [2024-01-14T18:50:25.100+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
simulator | 2024-01-14 18:51:04,271 WARN GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator will send credentials over a clear channel.
kafka | log.flush.interval.messages = 9223372036854775807
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-apex-pdp | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-01-14T18:49:50.859951637Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
policy-pap | allow.auto.create.topics = true
simulator | 2024-01-14 18:51:04,271 INFO GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator (as some-key) ...
kafka | log.flush.interval.ms = null
policy-db-migrator | --------------
policy-apex-pdp | metric.reporters = []
grafana | logger=migrator t=2024-01-14T18:49:50.861357245Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.404879ms
policy-pap | auto.commit.interval.ms = 5000
simulator | 2024-01-14 18:51:04,273 INFO 172.17.0.3 - - [14/Jan/2024:18:50:49 +0000] "GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
kafka | log.flush.offset.checkpoint.interval.ms = 60000
policy-db-migrator | 
policy-apex-pdp | metrics.num.samples = 2
grafana | logger=migrator t=2024-01-14T18:49:50.86550051Z level=info msg="Executing migration" id="Add check_sum column"
policy-pap | auto.include.jmx.reporter = true
simulator | 2024-01-14 18:51:04,274 INFO --> HTTP/1.1 200 OK
kafka | log.flush.scheduler.interval.ms = 9223372036854775807
policy-db-migrator | 
policy-apex-pdp | metrics.recording.level = INFO
grafana | logger=migrator t=2024-01-14T18:49:50.868971791Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.47162ms
policy-pap | auto.offset.reset = latest
simulator | 2024-01-14 18:51:04,274 INFO UEB GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator
kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-apex-pdp | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-01-14T18:49:50.875739906Z level=info msg="Executing migration" id="Add index for dashboard_title"
policy-pap | bootstrap.servers = [kafka:9092]
simulator | 2024-01-14 18:51:04,275 WARN GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator will send credentials over a clear channel.
kafka | log.index.interval.bytes = 4096
policy-db-migrator | --------------
policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-01-14T18:49:50.876663878Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=924.262µs
policy-pap | check.crcs = true
simulator | 2024-01-14 18:51:04,275 INFO GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator (as some-key) ...
kafka | log.index.size.max.bytes = 10485760
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-01-14T18:49:50.883566349Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
policy-pap | client.dns.lookup = use_all_dns_ips
simulator | 2024-01-14 18:51:19,277 INFO --> HTTP/1.1 200 OK
kafka | log.message.downconversion.enable = true
policy-db-migrator | --------------
policy-apex-pdp | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-01-14T18:49:50.883868029Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=301.101µs
policy-pap | client.id = consumer-policy-pap-2
simulator | 2024-01-14 18:51:19,277 INFO 172.17.0.3 - - [14/Jan/2024:18:51:04 +0000] "GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
kafka | log.message.format.version = 3.0-IV1
policy-db-migrator | 
policy-apex-pdp | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-01-14T18:49:50.888730228Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
policy-pap | client.rack = 
simulator | 2024-01-14 18:51:19,277 INFO UEB GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator
kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
policy-db-migrator | 
policy-apex-pdp | request.timeout.ms = 30000
grafana | logger=migrator t=2024-01-14T18:49:50.889198485Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=473.366µs
policy-pap | connections.max.idle.ms = 540000
simulator | 2024-01-14 18:51:19,278 WARN GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator will send credentials over a clear channel.
kafka | log.message.timestamp.type = CreateTime
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
policy-apex-pdp | retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-14T18:49:50.89424427Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
policy-pap | default.api.timeout.ms = 60000
simulator | 2024-01-14 18:51:19,278 INFO GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator (as some-key) ...
kafka | log.preallocate = false
policy-db-migrator | --------------
policy-apex-pdp | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-01-14T18:49:50.89567408Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.43202ms
policy-pap | enable.auto.commit = true
simulator | 2024-01-14 18:51:19,280 INFO 172.17.0.3 - - [14/Jan/2024:18:51:04 +0000] "GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
kafka | log.retention.bytes = -1
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | sasl.jaas.config = null
grafana | logger=migrator t=2024-01-14T18:49:50.899480222Z level=info msg="Executing migration" id="Add isPublic for dashboard"
policy-pap | exclude.internal.topics = true
simulator | 2024-01-14 18:51:19,280 INFO --> HTTP/1.1 200 OK
kafka | log.retention.check.interval.ms = 300000
policy-db-migrator | --------------
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-01-14T18:49:50.901767812Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.28721ms
policy-pap | fetch.max.bytes = 52428800
simulator | 2024-01-14 18:51:19,280 INFO UEB GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator
kafka | log.retention.hours = 168
policy-db-migrator | 
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-01-14T18:49:50.905170181Z level=info msg="Executing migration" id="create data_source table"
policy-pap | fetch.max.wait.ms = 500
simulator | 2024-01-14 18:51:19,281 WARN GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator will send credentials over a clear channel.
kafka | log.retention.minutes = null
policy-apex-pdp | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-01-14T18:49:50.906061452Z level=info msg="Migration successfully executed" id="create data_source table" duration=891.301µs
policy-pap | fetch.min.bytes = 1
policy-db-migrator | 
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
kafka | log.retention.ms = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | group.id = policy-pap
grafana | logger=migrator t=2024-01-14T18:49:50.911093197Z level=info msg="Executing migration" id="add index data_source.account_id"
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
kafka | log.roll.hours = 168
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-01-14T18:49:50.912694222Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.589175ms
policy-db-migrator | --------------
policy-db-migrator | 
kafka | log.roll.jitter.hours = 0
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-01-14T18:49:50.916679371Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
policy-db-migrator | 
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
kafka | log.roll.jitter.ms = null
policy-apex-pdp | sasl.login.class = null
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-01-14T18:49:50.918144852Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.465151ms
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
kafka | log.roll.ms = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-01-14T18:49:50.92325273Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
policy-db-migrator | --------------
policy-db-migrator | 
kafka | log.segment.bytes = 1073741824
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-01-14T18:49:50.92410811Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=849.639µs
policy-db-migrator | 
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
kafka | log.segment.delete.delay.ms = 60000
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-01-14T18:49:50.928677859Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
kafka | max.connection.creation.rate = 2147483647
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-01-14T18:49:50.929687924Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.009585ms
policy-db-migrator | --------------
policy-db-migrator | 
kafka | max.connections = 2147483647
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-01-14T18:49:50.936377267Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
simulator | 2024-01-14 18:51:19,281 INFO GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator (as some-key) ...
policy-db-migrator | 
kafka | max.connections.per.ip = 2147483647
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-01-14T18:49:50.94508271Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.699012ms
simulator | 2024-01-14 18:51:34,284 INFO 172.17.0.3 - - [14/Jan/2024:18:51:19 +0000] "GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
kafka | max.connections.per.ip.overrides = 
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-01-14T18:49:50.949198563Z level=info msg="Executing migration" id="create data_source table v2"
simulator | 2024-01-14 18:51:34,285 INFO 172.17.0.3 - - [14/Jan/2024:18:51:19 +0000] "GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-db-migrator | --------------
kafka | max.incremental.fetch.session.cache.slots = 1000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-01-14T18:49:50.949868456Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=669.813µs
simulator | 2024-01-14 18:51:34,284 INFO --> HTTP/1.1 200 OK
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | message.max.bytes = 1048588
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-01-14T18:49:50.953898227Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
simulator | 2024-01-14 18:51:34,286 INFO --> HTTP/1.1 200 OK
policy-db-migrator | --------------
kafka | metadata.log.dir = null
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-01-14T18:49:50.954806998Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=909.301µs
simulator | 2024-01-14 18:51:34,286 INFO UEB GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator
policy-db-migrator | 
kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-01-14T18:49:50.958570759Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
simulator | 2024-01-14 18:51:34,286 INFO UEB GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator
policy-db-migrator | 
kafka | metadata.log.max.snapshot.interval.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-01-14T18:49:50.959534033Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=962.414µs
simulator | 2024-01-14 18:51:34,287 WARN GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator will send credentials over a clear channel.
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
kafka | metadata.log.segment.bytes = 1073741824
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-01-14T18:49:50.963571913Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
simulator | 2024-01-14 18:51:34,287 INFO GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator (as some-key) ...
policy-db-migrator | --------------
kafka | metadata.log.segment.min.bytes = 8388608
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-01-14T18:49:50.964190505Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=618.312µs
simulator | 2024-01-14 18:51:34,288 WARN GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator will send credentials over a clear channel.
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | metadata.log.segment.ms = 604800000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-01-14T18:49:50.972511665Z level=info msg="Executing migration" id="Add column with_credentials"
simulator | 2024-01-14 18:51:34,288 INFO GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator (as some-key) ...
policy-db-migrator | --------------
kafka | metadata.max.idle.interval.ms = 500
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-01-14T18:49:50.976727061Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.227657ms
simulator | 2024-01-14 18:51:49,293 INFO 172.17.0.3 - - [14/Jan/2024:18:51:34 +0000] "GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-db-migrator | 
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
kafka | metadata.max.retention.bytes = 104857600
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-01-14T18:49:50.981106184Z level=info msg="Executing migration" id="Add secure json data column"
simulator | 2024-01-14 18:51:49,293 INFO --> HTTP/1.1 200 OK
policy-db-migrator | 
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
kafka | metadata.max.retention.ms = 604800000
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-14T18:49:50.983519868Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.413924ms
simulator | 2024-01-14 18:51:49,294 INFO UEB GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
kafka | metric.reporters = []
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-01-14T18:49:50.987747255Z level=info msg="Executing migration" id="Update data_source table charset"
simulator | 2024-01-14 18:51:49,295 WARN GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator will send credentials over a clear channel.
policy-db-migrator | --------------
policy-apex-pdp | security.protocol = PLAINTEXT
kafka | metrics.num.samples = 2
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-01-14T18:49:50.987838398Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=91.993µs
simulator | 2024-01-14 18:51:49,295 INFO GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator (as some-key) ...
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | security.providers = null
kafka | metrics.recording.level = INFO
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-01-14T18:49:50.992260432Z level=info msg="Executing migration" id="Update initial version to 1"
simulator | 2024-01-14 18:51:49,295 INFO 172.17.0.3 - - [14/Jan/2024:18:51:34 +0000] "GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-db-migrator | --------------
policy-apex-pdp | send.buffer.bytes = 131072
kafka | metrics.sample.window.ms = 30000
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-01-14T18:49:50.992552812Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=292.1µs
simulator | 2024-01-14 18:51:49,295 INFO --> HTTP/1.1 200 OK
policy-db-migrator | 
policy-apex-pdp | session.timeout.ms = 45000
kafka | min.insync.replicas = 1
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-01-14T18:49:50.996021233Z level=info msg="Executing migration" id="Add read_only data column"
simulator | 2024-01-14 18:51:49,296 INFO UEB GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator
policy-db-migrator | 
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
kafka | node.id = 1
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-01-14T18:49:50.998616493Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.59486ms
simulator | 2024-01-14 18:51:49,296 WARN GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator will send credentials over a clear channel.
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
kafka | num.io.threads = 8
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-01-14T18:49:51.001905188Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
simulator | 2024-01-14 18:51:49,297 INFO GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator (as some-key) ...
policy-db-migrator | --------------
policy-apex-pdp | ssl.cipher.suites = null
kafka | num.network.threads = 3
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-01-14T18:49:51.00225936Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=353.242µs
simulator | 2024-01-14 18:52:04,303 INFO 172.17.0.3 - - [14/Jan/2024:18:51:49 +0000] "GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | num.partitions = 1
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-01-14T18:49:51.005726101Z level=info msg="Executing migration" id="Update json_data with nulls"
simulator | 2024-01-14 18:52:04,303 INFO --> HTTP/1.1 200 OK
policy-db-migrator | --------------
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
kafka | num.recovery.threads.per.data.dir = 1
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-01-14T18:49:51.006193567Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=466.987µs
simulator | 2024-01-14 18:52:04,303 INFO UEB GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator
policy-db-migrator | 
policy-apex-pdp | ssl.engine.factory.class = null
kafka | num.replica.alter.log.dirs.threads = null
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-01-14T18:49:51.012945131Z level=info msg="Executing migration" id="Add uid column"
simulator | 2024-01-14 18:52:04,304 WARN GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator will send credentials over a clear channel.
policy-db-migrator | 
policy-apex-pdp | ssl.key.password = null
kafka | num.replica.fetchers = 1
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-01-14T18:49:51.017337973Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.393932ms
simulator | 2024-01-14 18:52:04,304 INFO GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator (as some-key) ...
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
kafka | offset.metadata.max.bytes = 4096
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-01-14T18:49:51.021960694Z level=info msg="Executing migration" id="Update uid value"
simulator | 2024-01-14 18:52:04,305 INFO --> HTTP/1.1 200 OK
policy-db-migrator | --------------
policy-apex-pdp | ssl.keystore.certificate.chain = null
kafka | offsets.commit.required.acks = -1
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-01-14T18:49:51.022263704Z level=info msg="Migration successfully executed" id="Update uid value" duration=302.67µs
simulator | 2024-01-14 18:52:04,305 INFO 172.17.0.3 - - [14/Jan/2024:18:51:49 +0000] "GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-apex-pdp | ssl.keystore.key = null
kafka | offsets.commit.timeout.ms = 5000
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-01-14T18:49:51.025865759Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
simulator | 2024-01-14 18:52:04,305 INFO UEB GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator
policy-db-migrator | --------------
policy-apex-pdp | ssl.keystore.location = null
kafka | offsets.load.buffer.size = 5242880
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-14T18:49:51.026822922Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=956.763µs
simulator | 2024-01-14 18:52:04,306 WARN GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator will send credentials over a clear channel.
policy-db-migrator | 
policy-apex-pdp | ssl.keystore.password = null
kafka | offsets.retention.check.interval.ms = 600000
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-14T18:49:51.033488454Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
simulator | 2024-01-14 18:52:04,306 INFO GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator (as some-key) ...
policy-db-migrator | 
policy-apex-pdp | ssl.keystore.type = JKS
kafka | offsets.retention.minutes = 10080
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-01-14T18:49:51.034413386Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=924.732µs
simulator | 2024-01-14 18:52:19,312 INFO 172.17.0.3 - - [14/Jan/2024:18:52:04 +0000] "GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
policy-apex-pdp | ssl.protocol = TLSv1.3
kafka | offsets.topic.compression.codec = 0
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-01-14T18:49:51.039320696Z level=info msg="Executing migration" id="create api_key table"
simulator | 2024-01-14 18:52:19,312 INFO --> HTTP/1.1 200 OK
policy-db-migrator | --------------
policy-apex-pdp | ssl.provider = null
kafka | offsets.topic.num.partitions = 50
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-01-14T18:49:51.040186296Z level=info msg="Migration successfully executed" id="create api_key table" duration=864.95µs
simulator | 2024-01-14 18:52:19,312 INFO UEB GET /events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | ssl.secure.random.implementation = null
kafka | offsets.topic.replication.factor = 1
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-01-14T18:49:51.043912116Z level=info msg="Executing migration" id="add index api_key.account_id"
simulator | 2024-01-14 18:52:19,313 INFO --> HTTP/1.1 200 OK
policy-db-migrator | --------------
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
kafka | offsets.topic.segment.bytes = 104857600
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-01-14T18:49:51.044830168Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=917.981µs
simulator | 2024-01-14 18:52:19,313 INFO UEB GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator
policy-db-migrator | 
policy-apex-pdp | ssl.truststore.certificates = null
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-14T18:49:51.053804269Z level=info msg="Executing migration" id="add index api_key.key"
simulator | 2024-01-14 18:52:19,313 INFO 172.17.0.3 - - [14/Jan/2024:18:52:04 +0000] "GET /events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator HTTP/1.1" 200 2 "-" "Apache-HttpClient/4.5.14 (Java/17.0.9)"
policy-db-migrator | 
policy-apex-pdp | ssl.truststore.location = null
kafka | password.encoder.iterations = 4096
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-14T18:49:51.055793558Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.977288ms
simulator | 2024-01-14 18:52:19,313 WARN GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator will send credentials over a clear channel.
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
policy-apex-pdp | ssl.truststore.password = null
kafka | password.encoder.key.length = 128
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-01-14T18:49:51.06564112Z level=info msg="Executing migration" id="add index api_key.account_id_name"
simulator | 2024-01-14 18:52:19,314 WARN GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator will send credentials over a clear channel.
policy-db-migrator | --------------
policy-apex-pdp | ssl.truststore.type = JKS
kafka | password.encoder.keyfactory.algorithm = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-01-14T18:49:51.067110141Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.468501ms
simulator | 2024-01-14 18:52:19,314 INFO GET http://simulator:3904/events/appc-lcm-read/024f82f7-ffad-4d64-9c13-138cc02c72df/simulator (as some-key) ...
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | password.encoder.old.secret = null policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-01-14T18:49:51.074391553Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" simulator | 2024-01-14 18:52:19,314 INFO GET http://simulator:3904/events/appc-cl/dc2fd0fa-84d1-496b-9c06-94d2f5ebfcce/simulator (as some-key) ... policy-db-migrator | -------------- policy-apex-pdp | kafka | password.encoder.secret = null policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-01-14T18:49:51.07602307Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.634157ms policy-db-migrator | policy-apex-pdp | [2024-01-14T18:50:28.366+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-01-14T18:49:51.083151928Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" policy-db-migrator | policy-apex-pdp | [2024-01-14T18:50:28.366+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a kafka | process.roles = [] policy-pap | security.providers = null grafana | logger=migrator t=2024-01-14T18:49:51.084434042Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.281814ms policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | -------------- policy-apex-pdp | [2024-01-14T18:50:28.366+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705258228366 policy-pap | 
send.buffer.bytes = 131072 grafana | logger=migrator t=2024-01-14T18:49:51.09101116Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" kafka | producer.id.expiration.check.interval.ms = 600000 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) policy-apex-pdp | [2024-01-14T18:50:28.367+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Subscribed to topic(s): policy-pdp-pap kafka | producer.id.expiration.ms = 86400000 grafana | logger=migrator t=2024-01-14T18:49:51.092230413Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.219413ms policy-apex-pdp | [2024-01-14T18:50:28.367+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=bd0fb875-ff67-49e5-9a0b-f9af774956e2, alive=false, publisher=null]]: starting policy-db-migrator | -------------- policy-pap | session.timeout.ms = 45000 kafka | producer.purgatory.purge.interval.requests = 1000 grafana | logger=migrator t=2024-01-14T18:49:51.098004053Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" policy-db-migrator | policy-apex-pdp | [2024-01-14T18:50:28.378+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | socket.connection.setup.timeout.max.ms = 30000 kafka | queued.max.request.bytes = -1 grafana | logger=migrator t=2024-01-14T18:49:51.110528488Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=12.528245ms policy-db-migrator | policy-apex-pdp | acks = -1 policy-pap | socket.connection.setup.timeout.ms = 10000 kafka | 
queued.max.requests = 500 grafana | logger=migrator t=2024-01-14T18:49:51.121318962Z level=info msg="Executing migration" id="create api_key table v2" policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-apex-pdp | auto.include.jmx.reporter = true policy-pap | ssl.cipher.suites = null kafka | quota.window.num = 11 grafana | logger=migrator t=2024-01-14T18:49:51.1221294Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=810.278µs policy-db-migrator | -------------- policy-apex-pdp | batch.size = 16384 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | quota.window.size.seconds = 1 grafana | logger=migrator t=2024-01-14T18:49:51.127984844Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-pap | ssl.endpoint.identification.algorithm = https kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 grafana | logger=migrator t=2024-01-14T18:49:51.128780772Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=799.757µs policy-db-migrator | -------------- policy-apex-pdp | buffer.memory = 33554432 policy-pap | ssl.engine.factory.class = null kafka | remote.log.manager.task.interval.ms = 30000 grafana | logger=migrator t=2024-01-14T18:49:51.137006647Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" policy-db-migrator | policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-pap | ssl.key.password = null kafka | 
remote.log.manager.task.retry.backoff.max.ms = 30000 grafana | logger=migrator t=2024-01-14T18:49:51.138279731Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.273284ms policy-db-migrator | policy-apex-pdp | client.id = producer-1 policy-pap | ssl.keymanager.algorithm = SunX509 kafka | remote.log.manager.task.retry.backoff.ms = 500 grafana | logger=migrator t=2024-01-14T18:49:51.143126139Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-apex-pdp | compression.type = none policy-pap | ssl.keystore.certificate.chain = null kafka | remote.log.manager.task.retry.jitter = 0.2 grafana | logger=migrator t=2024-01-14T18:49:51.144555099Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.42804ms policy-db-migrator | -------------- policy-apex-pdp | connections.max.idle.ms = 540000 policy-pap | ssl.keystore.key = null kafka | remote.log.manager.thread.pool.size = 10 grafana | logger=migrator t=2024-01-14T18:49:51.148463875Z level=info msg="Executing migration" id="copy api_key v1 to v2" policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-apex-pdp | delivery.timeout.ms = 120000 policy-pap | ssl.keystore.location = null kafka | remote.log.metadata.manager.class.name = null grafana | logger=migrator t=2024-01-14T18:49:51.149029734Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=562.129µs policy-db-migrator | -------------- policy-apex-pdp | enable.idempotence = true policy-pap | ssl.keystore.password = null kafka | remote.log.metadata.manager.class.path = null grafana | logger=migrator t=2024-01-14T18:49:51.153389206Z level=info msg="Executing migration" id="Drop old 
table api_key_v1" policy-db-migrator | policy-apex-pdp | interceptor.classes = [] policy-pap | ssl.keystore.type = JKS kafka | remote.log.metadata.manager.impl.prefix = null grafana | logger=migrator t=2024-01-14T18:49:51.154158162Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=769.786µs policy-db-migrator | policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | ssl.protocol = TLSv1.3 kafka | remote.log.metadata.manager.listener.name = null grafana | logger=migrator t=2024-01-14T18:49:51.160878775Z level=info msg="Executing migration" id="Update api_key table charset" policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-apex-pdp | linger.ms = 0 policy-pap | ssl.provider = null kafka | remote.log.reader.max.pending.tasks = 100 grafana | logger=migrator t=2024-01-14T18:49:51.160907596Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=29.941µs policy-db-migrator | -------------- policy-apex-pdp | max.block.ms = 60000 policy-pap | ssl.secure.random.implementation = null kafka | remote.log.reader.threads = 10 grafana | logger=migrator t=2024-01-14T18:49:51.164778471Z level=info msg="Executing migration" id="Add expires to api_key table" policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-pap | ssl.trustmanager.algorithm = PKIX kafka | remote.log.storage.manager.class.name = null grafana | logger=migrator 
t=2024-01-14T18:49:51.16761481Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.836129ms policy-db-migrator | -------------- policy-apex-pdp | max.request.size = 1048576 policy-pap | ssl.truststore.certificates = null kafka | remote.log.storage.manager.class.path = null grafana | logger=migrator t=2024-01-14T18:49:51.17310399Z level=info msg="Executing migration" id="Add service account foreign key" policy-db-migrator | policy-apex-pdp | metadata.max.age.ms = 300000 policy-pap | ssl.truststore.location = null kafka | remote.log.storage.manager.impl.prefix = null grafana | logger=migrator t=2024-01-14T18:49:51.176307251Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.202941ms policy-db-migrator | policy-apex-pdp | metadata.max.idle.ms = 300000 policy-pap | ssl.truststore.password = null kafka | remote.log.storage.system.enable = false grafana | logger=migrator t=2024-01-14T18:49:51.181585714Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" policy-db-migrator | > upgrade 0470-pdp.sql policy-apex-pdp | metric.reporters = [] policy-pap | ssl.truststore.type = JKS kafka | replica.fetch.backoff.ms = 1000 grafana | logger=migrator t=2024-01-14T18:49:51.18176519Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=180.496µs policy-db-migrator | -------------- policy-apex-pdp | metrics.num.samples = 2 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | replica.fetch.max.bytes = 1048576 grafana | logger=migrator t=2024-01-14T18:49:51.190296467Z level=info msg="Executing migration" id="Add last_used_at to api_key table" policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, 
parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | metrics.recording.level = INFO policy-pap | kafka | replica.fetch.min.bytes = 1 grafana | logger=migrator t=2024-01-14T18:49:51.19556577Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=5.268472ms policy-db-migrator | -------------- policy-apex-pdp | metrics.sample.window.ms = 30000 policy-pap | [2024-01-14T18:50:25.105+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 kafka | replica.fetch.response.max.bytes = 10485760 grafana | logger=migrator t=2024-01-14T18:49:51.199279078Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" policy-db-migrator | policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-pap | [2024-01-14T18:50:25.105+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a kafka | replica.fetch.wait.max.ms = 500 grafana | logger=migrator t=2024-01-14T18:49:51.20221563Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.936012ms policy-db-migrator | policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-pap | [2024-01-14T18:50:25.105+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705258225105 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 grafana | logger=migrator t=2024-01-14T18:49:51.209311737Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-apex-pdp | partitioner.class = null policy-pap | [2024-01-14T18:50:25.106+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap kafka | replica.lag.time.max.ms = 30000 grafana | logger=migrator t=2024-01-14T18:49:51.210133365Z level=info msg="Migration successfully executed" id="create 
dashboard_snapshot table v4" duration=822.438µs policy-db-migrator | -------------- policy-apex-pdp | partitioner.ignore.keys = false kafka | replica.selector.class = null grafana | logger=migrator t=2024-01-14T18:49:51.214265999Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" policy-pap | [2024-01-14T18:50:25.399+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-apex-pdp | receive.buffer.bytes = 32768 kafka | replica.socket.receive.buffer.bytes = 65536 grafana | logger=migrator t=2024-01-14T18:49:51.2148656Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=600.201µs policy-pap | [2024-01-14T18:50:25.554+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-db-migrator | -------------- policy-apex-pdp | reconnect.backoff.max.ms = 1000 kafka | replica.socket.timeout.ms = 30000 grafana | logger=migrator t=2024-01-14T18:49:51.218102462Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" policy-pap | [2024-01-14T18:50:25.762+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@19f99aaf, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5020e5ab, org.springframework.security.web.context.SecurityContextHolderFilter@abf1816, org.springframework.security.web.header.HeaderWriterFilter@73baf7f0, org.springframework.security.web.authentication.logout.LogoutFilter@314c28dc, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@c7c07ff, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@7bd1098, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@6a562255, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@19f1f330, org.springframework.security.web.access.ExceptionTranslationFilter@640a6d4b, org.springframework.security.web.access.intercept.AuthorizationFilter@2ffcdc9b] policy-db-migrator | policy-apex-pdp | reconnect.backoff.ms = 50 kafka | replication.quota.window.num = 11 grafana | logger=migrator t=2024-01-14T18:49:51.218896109Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=784.477µs policy-pap | [2024-01-14T18:50:26.567+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-db-migrator | policy-apex-pdp | request.timeout.ms = 30000 kafka | replication.quota.window.size.seconds = 1 grafana | logger=migrator t=2024-01-14T18:49:51.224851036Z level=info msg="Executing migration" id="create index 
UQE_dashboard_snapshot_key - v5" policy-pap | [2024-01-14T18:50:26.623+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-apex-pdp | retries = 2147483647 kafka | request.timeout.ms = 30000 grafana | logger=migrator t=2024-01-14T18:49:51.225983935Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.133219ms policy-pap | [2024-01-14T18:50:26.647+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-db-migrator | -------------- policy-apex-pdp | retry.backoff.ms = 100 kafka | reserved.broker.max.id = 1000 grafana | logger=migrator t=2024-01-14T18:49:51.233071422Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" policy-pap | [2024-01-14T18:50:26.662+00:00|INFO|ServiceManager|main] Policy PAP starting policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | sasl.client.callback.handler.class = null kafka | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-01-14T18:49:51.234409038Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.338487ms policy-pap | [2024-01-14T18:50:26.663+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-db-migrator | -------------- policy-apex-pdp | sasl.jaas.config = null kafka | 
sasl.enabled.mechanisms = [GSSAPI] grafana | logger=migrator t=2024-01-14T18:49:51.257413317Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" policy-pap | [2024-01-14T18:50:26.663+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-db-migrator | policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.jaas.config = null grafana | logger=migrator t=2024-01-14T18:49:51.258812275Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.400609ms policy-pap | [2024-01-14T18:50:26.664+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-db-migrator | policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-01-14T18:49:51.268887215Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" policy-pap | [2024-01-14T18:50:26.664+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-apex-pdp | sasl.kerberos.service.name = null kafka | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-01-14T18:49:51.269092222Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=206.827µs policy-pap | [2024-01-14T18:50:26.664+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-db-migrator | -------------- policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] grafana | logger=migrator t=2024-01-14T18:49:51.275912089Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" policy-pap | [2024-01-14T18:50:26.664+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-db-migrator | CREATE TABLE IF 
NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-01-14T18:49:51.276034033Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=119.504µs policy-pap | [2024-01-14T18:50:26.669+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9f04366a-9b2f-4312-96e1-33019febbf8b, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@24558113 policy-db-migrator | -------------- policy-apex-pdp | sasl.login.callback.handler.class = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-01-14T18:49:51.280027241Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" policy-pap | [2024-01-14T18:50:26.680+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9f04366a-9b2f-4312-96e1-33019febbf8b, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, 
consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | policy-apex-pdp | sasl.login.class = null kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-01-14T18:49:51.283276294Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.246423ms policy-pap | [2024-01-14T18:50:26.680+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-db-migrator | policy-apex-pdp | sasl.login.connect.timeout.ms = null kafka | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-01-14T18:49:51.287429879Z level=info msg="Executing migration" id="Add encrypted dashboard json column" policy-pap | allow.auto.create.topics = true policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-apex-pdp | sasl.login.read.timeout.ms = null kafka | sasl.login.class = null grafana | logger=migrator t=2024-01-14T18:49:51.290414242Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.984044ms policy-pap | auto.commit.interval.ms = 5000 policy-db-migrator | -------------- policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-01-14T18:49:51.304736039Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" policy-pap | auto.include.jmx.reporter = true policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version 
VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.read.timeout.ms = null grafana | logger=migrator t=2024-01-14T18:49:51.304956597Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=222.998µs policy-pap | auto.offset.reset = latest policy-db-migrator | -------------- policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.buffer.seconds = 300 grafana | logger=migrator t=2024-01-14T18:49:51.311451302Z level=info msg="Executing migration" id="create quota table v1" policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.login.refresh.min.period.seconds = 60 grafana | logger=migrator t=2024-01-14T18:49:51.312561811Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.103089ms policy-pap | check.crcs = true policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 grafana | logger=migrator t=2024-01-14T18:49:51.316395874Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | kafka | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-pap | client.id = consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql grafana | logger=migrator t=2024-01-14T18:49:51.317357117Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=962.803µs kafka | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.mechanism = GSSAPI policy-pap | client.rack = policy-db-migrator | -------------- grafana | logger=migrator 
t=2024-01-14T18:49:51.323878494Z level=info msg="Executing migration" id="Update quota table charset" kafka | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-pap | connections.max.idle.ms = 540000 grafana | logger=migrator t=2024-01-14T18:49:51.324025889Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=148.996µs kafka | sasl.login.retry.backoff.ms = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-pap | default.api.timeout.ms = 60000 grafana | logger=migrator t=2024-01-14T18:49:51.332456932Z level=info msg="Executing migration" id="create plugin_setting table" kafka | sasl.mechanism.controller.protocol = GSSAPI policy-db-migrator | -------------- policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-pap | enable.auto.commit = true grafana | logger=migrator t=2024-01-14T18:49:51.333319902Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=863.16µs kafka | sasl.mechanism.inter.broker.protocol = GSSAPI policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | exclude.internal.topics = true grafana | logger=migrator t=2024-01-14T18:49:51.345149812Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" kafka | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | fetch.max.bytes = 52428800 grafana | logger=migrator t=2024-01-14T18:49:51.346011062Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=863.19µs kafka | sasl.oauthbearer.expected.audience = null 
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-01-14T18:49:51.351447921Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
kafka | sasl.oauthbearer.expected.issuer = null
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-01-14T18:49:51.363327373Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=11.874382ms
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-pap | group.id = 9f04366a-9b2f-4312-96e1-33019febbf8b
grafana | logger=migrator t=2024-01-14T18:49:51.371381933Z level=info msg="Executing migration" id="Update plugin_setting table charset"
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-01-14T18:49:51.371422874Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=42.802µs
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator |
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-01-14T18:49:51.375866548Z level=info msg="Executing migration" id="create session table"
kafka | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator |
policy-apex-pdp | security.protocol = PLAINTEXT
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-01-14T18:49:51.376650006Z level=info msg="Migration successfully executed" id="create session table" duration=783.637µs
kafka | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-apex-pdp | security.providers = null
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-01-14T18:49:51.385939198Z level=info msg="Executing migration" id="Drop old table playlist table"
kafka | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | --------------
policy-apex-pdp | send.buffer.bytes = 131072
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-01-14T18:49:51.386030551Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=91.963µs
kafka | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-pap | isolation.level = read_uncommitted
grafana | logger=migrator t=2024-01-14T18:49:51.394858438Z level=info msg="Executing migration" id="Drop old table playlist_item table"
kafka | sasl.server.callback.handler.class = null
policy-db-migrator | --------------
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-01-14T18:49:51.394984852Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=126.934µs
kafka | sasl.server.max.receive.size = 524288
policy-db-migrator |
policy-apex-pdp | ssl.cipher.suites = null
policy-pap | max.partition.fetch.bytes = 1048576
grafana | logger=migrator t=2024-01-14T18:49:51.402276815Z level=info msg="Executing migration" id="create playlist table v2"
kafka | security.inter.broker.protocol = PLAINTEXT
policy-db-migrator |
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | max.poll.interval.ms = 300000
grafana | logger=migrator t=2024-01-14T18:49:51.404033286Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.753261ms
kafka | security.providers = null
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-pap | max.poll.records = 500
grafana | logger=migrator t=2024-01-14T18:49:51.40962731Z level=info msg="Executing migration" id="create playlist item table v2"
kafka | server.max.startup.time.ms = 9223372036854775807
policy-db-migrator | --------------
policy-apex-pdp | ssl.engine.factory.class = null
policy-pap | metadata.max.age.ms = 300000
grafana | logger=migrator t=2024-01-14T18:49:51.410234781Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=607.891µs
kafka | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
policy-apex-pdp | ssl.key.password = null
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-01-14T18:49:51.419777113Z level=info msg="Executing migration" id="Update playlist table charset"
kafka | socket.connection.setup.timeout.ms = 10000
policy-db-migrator | --------------
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-01-14T18:49:51.419819504Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=44.732µs
kafka | socket.listen.backlog.size = 50
policy-db-migrator |
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-01-14T18:49:51.425347426Z level=info msg="Executing migration" id="Update playlist_item table charset"
kafka | socket.receive.buffer.bytes = 102400
policy-db-migrator |
policy-apex-pdp | ssl.keystore.key = null
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-01-14T18:49:51.425393417Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=49.661µs
kafka | socket.request.max.bytes = 104857600
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-apex-pdp | ssl.keystore.location = null
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
grafana | logger=migrator t=2024-01-14T18:49:51.43124226Z level=info msg="Executing migration" id="Add playlist column created_at"
kafka | socket.send.buffer.bytes = 102400
policy-db-migrator | --------------
policy-apex-pdp | ssl.keystore.password = null
policy-pap | receive.buffer.bytes = 65536
grafana | logger=migrator t=2024-01-14T18:49:51.433639663Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.395933ms
kafka | ssl.cipher.suites = []
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-apex-pdp | ssl.keystore.type = JKS
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-01-14T18:49:51.438987549Z level=info msg="Executing migration" id="Add playlist column updated_at"
kafka | ssl.client.auth = none
policy-db-migrator | --------------
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-01-14T18:49:51.442302414Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.315115ms
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-db-migrator |
policy-apex-pdp | ssl.provider = null
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-01-14T18:49:51.447305838Z level=info msg="Executing migration" id="drop preferences table v2"
kafka | ssl.endpoint.identification.algorithm = https
policy-db-migrator |
policy-apex-pdp | ssl.secure.random.implementation = null
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-14T18:49:51.447401271Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=96.513µs
kafka | ssl.engine.factory.class = null
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-01-14T18:49:51.452531359Z level=info msg="Executing migration" id="drop preferences table v3"
kafka | ssl.key.password = null
policy-db-migrator | --------------
policy-apex-pdp | ssl.truststore.certificates = null
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-01-14T18:49:51.452676784Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=146.255µs
kafka | ssl.keymanager.algorithm = SunX509
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
policy-apex-pdp | ssl.truststore.location = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-01-14T18:49:51.458491476Z level=info msg="Executing migration" id="create preferences table v3"
kafka | ssl.keystore.certificate.chain = null
policy-db-migrator | --------------
policy-apex-pdp | ssl.truststore.password = null
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
grafana | logger=migrator t=2024-01-14T18:49:51.459700238Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.208332ms
kafka | ssl.keystore.key = null
policy-db-migrator |
policy-apex-pdp | ssl.truststore.type = JKS
policy-pap | sasl.kerberos.service.name = null
grafana | logger=migrator t=2024-01-14T18:49:51.466293707Z level=info msg="Executing migration" id="Update preferences table charset"
kafka | ssl.keystore.location = null
policy-db-migrator |
policy-apex-pdp | transaction.timeout.ms = 60000
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
grafana | logger=migrator t=2024-01-14T18:49:51.466323458Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=31.081µs
kafka | ssl.keystore.password = null
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-apex-pdp | transactional.id = null
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
grafana | logger=migrator t=2024-01-14T18:49:51.472241553Z level=info msg="Executing migration" id="Add column team_id in preferences"
kafka | ssl.keystore.type = JKS
policy-db-migrator | --------------
policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | sasl.login.callback.handler.class = null
grafana | logger=migrator t=2024-01-14T18:49:51.476082667Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.840784ms
kafka | ssl.principal.mapping.rules = DEFAULT
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-apex-pdp |
policy-pap | sasl.login.class = null
grafana | logger=migrator t=2024-01-14T18:49:51.482882593Z level=info msg="Executing migration" id="Update team_id column values in preferences"
kafka | ssl.protocol = TLSv1.3
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-14T18:50:28.386+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-pap | sasl.login.connect.timeout.ms = null
grafana | logger=migrator t=2024-01-14T18:49:51.48307104Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=195.506µs
kafka | ssl.provider = null
policy-db-migrator |
policy-apex-pdp | [2024-01-14T18:50:28.404+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-pap | sasl.login.read.timeout.ms = null
grafana | logger=migrator t=2024-01-14T18:49:51.489291945Z level=info msg="Executing migration" id="Add column week_start in preferences"
kafka | ssl.secure.random.implementation = null
policy-db-migrator |
policy-apex-pdp | [2024-01-14T18:50:28.404+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-pap | sasl.login.refresh.buffer.seconds = 300
grafana | logger=migrator t=2024-01-14T18:49:51.494966692Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=5.673767ms
kafka | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-apex-pdp | [2024-01-14T18:50:28.404+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705258228404
policy-pap | sasl.login.refresh.min.period.seconds = 60
grafana | logger=migrator t=2024-01-14T18:49:51.499643045Z level=info msg="Executing migration" id="Add column preferences.json_data"
kafka | ssl.truststore.certificates = null
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-14T18:50:28.404+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=bd0fb875-ff67-49e5-9a0b-f9af774956e2, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | sasl.login.refresh.window.factor = 0.8
grafana | logger=migrator t=2024-01-14T18:49:51.503228229Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.584574ms
kafka | ssl.truststore.location = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-apex-pdp | [2024-01-14T18:50:28.405+00:00|INFO|ServiceManager|main] service manager starting set alive
policy-pap | sasl.login.refresh.window.jitter = 0.05
grafana | logger=migrator t=2024-01-14T18:49:51.510061696Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
policy-apex-pdp | [2024-01-14T18:50:28.405+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
policy-pap | sasl.login.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-14T18:49:51.510320985Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=260.919µs
policy-db-migrator | --------------
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
policy-apex-pdp | [2024-01-14T18:50:28.407+00:00|INFO|ServiceManager|main] service manager starting topic sinks
policy-pap | sasl.login.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-14T18:49:51.519106601Z level=info msg="Executing migration" id="Add preferences index org_id"
policy-db-migrator |
kafka | transaction.max.timeout.ms = 900000
policy-apex-pdp | [2024-01-14T18:50:28.407+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-01-14T18:49:51.520439297Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.337967ms
policy-db-migrator |
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
policy-apex-pdp | [2024-01-14T18:50:28.409+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-01-14T18:49:51.526680193Z level=info msg="Executing migration" id="Add preferences index user_id"
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
kafka | transaction.state.log.load.buffer.size = 5242880
policy-apex-pdp | [2024-01-14T18:50:28.409+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-01-14T18:49:51.527430819Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=751.826µs
policy-db-migrator | --------------
kafka | transaction.state.log.min.isr = 2
policy-apex-pdp | [2024-01-14T18:50:28.409+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-01-14T18:49:51.533668356Z level=info msg="Executing migration" id="create alert table v1"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
kafka | transaction.state.log.num.partitions = 50
policy-apex-pdp | [2024-01-14T18:50:28.409+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4f5099d3-3717-42bb-ba40-fb39c13c7c61, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
grafana | logger=migrator t=2024-01-14T18:49:51.534783435Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.133029ms
policy-db-migrator | --------------
kafka | transaction.state.log.replication.factor = 3
policy-apex-pdp | [2024-01-14T18:50:28.410+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4f5099d3-3717-42bb-ba40-fb39c13c7c61, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator |
kafka | transaction.state.log.segment.bytes = 104857600
grafana | logger=migrator t=2024-01-14T18:49:51.541421695Z level=info msg="Executing migration" id="add index alert org_id & id "
policy-apex-pdp | [2024-01-14T18:50:28.410+00:00|INFO|ServiceManager|main] service manager starting Create REST server
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator |
kafka | transactional.id.expiration.ms = 604800000
grafana | logger=migrator t=2024-01-14T18:49:51.542188742Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=767.537µs
policy-apex-pdp | [2024-01-14T18:50:28.425+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
kafka | unclean.leader.election.enable = false
grafana | logger=migrator t=2024-01-14T18:49:51.545390163Z level=info msg="Executing migration" id="add index alert state"
policy-apex-pdp | []
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | --------------
kafka | unstable.api.versions.enable = false
grafana | logger=migrator t=2024-01-14T18:49:51.545990364Z level=info msg="Migration successfully executed" id="add index alert state" duration=600.091µs
policy-apex-pdp | [2024-01-14T18:50:28.428+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
kafka | zookeeper.clientCnxnSocket = null
grafana | logger=migrator t=2024-01-14T18:49:51.553214974Z level=info msg="Executing migration" id="add index alert dashboard_id"
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ddc84aa5-02b5-4167-b1c2-fcce08a87a27","timestampMs":1705258228409,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup"}
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | --------------
kafka | zookeeper.connect = zookeeper:2181
grafana | logger=migrator t=2024-01-14T18:49:51.553825885Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=609.931µs
policy-apex-pdp | [2024-01-14T18:50:28.555+00:00|INFO|ServiceManager|main] service manager starting Rest Server
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator |
kafka | zookeeper.connection.timeout.ms = null
grafana | logger=migrator t=2024-01-14T18:49:51.564188825Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
policy-pap | security.providers = null
policy-apex-pdp | [2024-01-14T18:50:28.555+00:00|INFO|ServiceManager|main] service manager starting
policy-db-migrator |
kafka | zookeeper.max.in.flight.requests = 10
grafana | logger=migrator t=2024-01-14T18:49:51.564635331Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=446.786µs
policy-pap | send.buffer.bytes = 131072
policy-apex-pdp | [2024-01-14T18:50:28.555+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
kafka | zookeeper.metadata.migration.enable = false
grafana | logger=migrator t=2024-01-14T18:49:51.569145207Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
policy-pap | session.timeout.ms = 45000
policy-apex-pdp | [2024-01-14T18:50:28.555+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@71a9b4c7{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4628b1d3{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator | --------------
kafka | zookeeper.session.timeout.ms = 18000
grafana | logger=migrator t=2024-01-14T18:49:51.569781919Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=636.862µs
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | [2024-01-14T18:50:28.567+00:00|INFO|ServiceManager|main] service manager started
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
kafka | zookeeper.set.acl = false
grafana | logger=migrator t=2024-01-14T18:49:51.572954319Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | [2024-01-14T18:50:28.568+00:00|INFO|ServiceManager|main] service manager started
policy-db-migrator | --------------
kafka | zookeeper.ssl.cipher.suites = null
grafana | logger=migrator t=2024-01-14T18:49:51.57355502Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=600.691µs
policy-apex-pdp | [2024-01-14T18:50:28.568+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
policy-db-migrator |
kafka | zookeeper.ssl.client.enable = false
grafana | logger=migrator t=2024-01-14T18:49:51.578463651Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
policy-pap | ssl.cipher.suites = null
policy-apex-pdp | [2024-01-14T18:50:28.568+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@71a9b4c7{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4628b1d3{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-db-migrator |
kafka | zookeeper.ssl.crl.enable = false
grafana | logger=migrator t=2024-01-14T18:49:51.591801454Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.329882ms
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | [2024-01-14T18:50:28.721+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 0Gs_niWkQtyT_H8dS3neSw
policy-db-migrator | > upgrade 0630-toscanodetype.sql
kafka | zookeeper.ssl.enabled.protocols = null
grafana | logger=migrator t=2024-01-14T18:49:51.601788121Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
policy-pap | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | [2024-01-14T18:50:28.721+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Cluster ID: 0Gs_niWkQtyT_H8dS3neSw
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:51.602535966Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=750.836µs
policy-pap | ssl.engine.factory.class = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
policy-apex-pdp | [2024-01-14T18:50:28.722+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
grafana | logger=migrator t=2024-01-14T18:49:51.614262543Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
policy-pap | ssl.key.password = null
kafka | zookeeper.ssl.keystore.location = null
policy-apex-pdp | [2024-01-14T18:50:28.722+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:51.615349181Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.085828ms
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | zookeeper.ssl.keystore.password = null
policy-apex-pdp | [2024-01-14T18:50:28.727+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] (Re-)joining group
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:51.619954921Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
policy-pap | ssl.keystore.certificate.chain = null
kafka | zookeeper.ssl.keystore.type = null
policy-apex-pdp | [2024-01-14T18:50:28.741+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Request joining group due to: need to re-join with the given member-id: consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2-99824490-0a38-49b6-87ba-48ad4efe4059
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:51.620341465Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=386.993µs
policy-pap | ssl.keystore.key = null
kafka | zookeeper.ssl.ocsp.enable = false
policy-apex-pdp | [2024-01-14T18:50:28.741+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
grafana | logger=migrator t=2024-01-14T18:49:51.623738822Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
policy-pap | ssl.keystore.location = null
kafka | zookeeper.ssl.protocol = TLSv1.2
policy-apex-pdp | [2024-01-14T18:50:28.741+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] (Re-)joining group
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:51.625088569Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.349287ms
policy-pap | ssl.keystore.password = null
kafka | zookeeper.ssl.truststore.location = null
policy-apex-pdp | [2024-01-14T18:50:29.179+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
grafana | logger=migrator t=2024-01-14T18:49:51.632950022Z level=info msg="Executing migration" id="create alert_notification table v1"
policy-pap | ssl.keystore.type = JKS
kafka | zookeeper.ssl.truststore.password = null
policy-apex-pdp | [2024-01-14T18:50:29.180+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-db-migrator | --------------
policy-pap | ssl.protocol = TLSv1.3
kafka | zookeeper.ssl.truststore.type = null
policy-apex-pdp | [2024-01-14T18:50:31.747+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Successfully joined group with generation Generation{generationId=1, memberId='consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2-99824490-0a38-49b6-87ba-48ad4efe4059', protocol='range'}
grafana | logger=migrator t=2024-01-14T18:49:51.636393941Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=3.442939ms policy-db-migrator | policy-pap | ssl.provider = null kafka | (kafka.server.KafkaConfig) policy-apex-pdp | [2024-01-14T18:50:31.758+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Finished assignment for group at generation 1: {consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2-99824490-0a38-49b6-87ba-48ad4efe4059=Assignment(partitions=[policy-pdp-pap-0])} grafana | logger=migrator t=2024-01-14T18:49:51.641484108Z level=info msg="Executing migration" id="Add column is_default" policy-db-migrator | policy-pap | ssl.secure.random.implementation = null kafka | [2024-01-14 18:49:58,522] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-apex-pdp | [2024-01-14T18:50:31.768+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Successfully synced group in generation Generation{generationId=1, memberId='consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2-99824490-0a38-49b6-87ba-48ad4efe4059', protocol='range'} grafana | logger=migrator t=2024-01-14T18:49:51.647366663Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.897144ms policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-01-14 18:49:58,522] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-apex-pdp | [2024-01-14T18:50:31.768+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, 
groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) grafana | logger=migrator t=2024-01-14T18:49:51.651425943Z level=info msg="Executing migration" id="Add column frequency" policy-db-migrator | -------------- policy-pap | ssl.truststore.certificates = null kafka | [2024-01-14 18:49:58,523] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-apex-pdp | [2024-01-14T18:50:31.770+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Adding newly assigned partitions: policy-pdp-pap-0 grafana | logger=migrator t=2024-01-14T18:49:51.654931125Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.501292ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | ssl.truststore.location = null kafka | [2024-01-14 18:49:58,525] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) policy-apex-pdp | [2024-01-14T18:50:31.777+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Found no committed offset for partition policy-pdp-pap-0 grafana | logger=migrator t=2024-01-14T18:49:51.664221708Z level=info msg="Executing migration" id="Add column send_reminder" policy-db-migrator | -------------- policy-pap | 
ssl.truststore.password = null kafka | [2024-01-14 18:49:58,554] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) policy-apex-pdp | [2024-01-14T18:50:31.790+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2, groupId=4f5099d3-3717-42bb-ba40-fb39c13c7c61] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. grafana | logger=migrator t=2024-01-14T18:49:51.669585284Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=5.365557ms policy-db-migrator | policy-pap | ssl.truststore.type = JKS kafka | [2024-01-14 18:49:58,561] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) policy-apex-pdp | [2024-01-14T18:50:48.410+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-01-14T18:49:51.678528414Z level=info msg="Executing migration" id="Add column disable_resolve_message" policy-db-migrator | policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-01-14 18:49:58,569] INFO Loaded 0 logs in 15ms (kafka.log.LogManager) grafana | logger=migrator t=2024-01-14T18:49:51.683185876Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.658832ms policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c64813f5-5d15-4f56-acff-79be224de4a1","timestampMs":1705258248410,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup"} policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-pap | kafka | [2024-01-14 18:49:58,571] INFO Starting log cleanup with a period of 300000 ms. 
(kafka.log.LogManager) grafana | logger=migrator t=2024-01-14T18:49:51.689204165Z level=info msg="Executing migration" id="add index alert_notification org_id & name" policy-apex-pdp | [2024-01-14T18:50:48.433+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:26.686+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 grafana | logger=migrator t=2024-01-14T18:49:51.6904938Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.290214ms policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c64813f5-5d15-4f56-acff-79be224de4a1","timestampMs":1705258248410,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup"} policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | [2024-01-14T18:50:26.687+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a kafka | [2024-01-14 18:49:58,572] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) grafana | logger=migrator t=2024-01-14T18:49:51.69541259Z level=info msg="Executing migration" id="Update alert table charset" policy-apex-pdp | [2024-01-14T18:50:48.436+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:26.687+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705258226686 kafka | [2024-01-14 18:49:58,581] INFO Starting the log cleaner (kafka.log.LogCleaner) grafana | logger=migrator t=2024-01-14T18:49:51.695452092Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=41.991µs policy-apex-pdp | [2024-01-14T18:50:48.613+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | policy-pap | [2024-01-14T18:50:26.687+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Subscribed to topic(s): policy-pdp-pap kafka | [2024-01-14 18:49:58,625] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) grafana | logger=migrator t=2024-01-14T18:49:51.701426329Z level=info msg="Executing migration" id="Update alert_notification table charset" policy-apex-pdp | {"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"757313d4-7af4-4931-9ac2-fd2d2926c923","timestampMs":1705258248548,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | policy-pap | [2024-01-14T18:50:26.687+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher kafka | [2024-01-14 18:49:58,639] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) grafana | logger=migrator t=2024-01-14T18:49:51.701502071Z 
level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=76.472µs policy-apex-pdp | [2024-01-14T18:50:48.625+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-pap | [2024-01-14T18:50:26.687+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=0937c149-2aa8-46a7-9e1a-1a04d55e7141, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@68e094c9 kafka | [2024-01-14 18:49:58,655] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) grafana | logger=migrator t=2024-01-14T18:49:51.705322394Z level=info msg="Executing migration" id="create notification_journal table v1" policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ad808b82-ffd2-475d-8e29-60ac57d7e4a3","timestampMs":1705258248625,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup"} policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:26.687+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=0937c149-2aa8-46a7-9e1a-1a04d55e7141, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, 
uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting kafka | [2024-01-14 18:49:58,679] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) grafana | logger=migrator t=2024-01-14T18:49:51.706095001Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=772.247µs policy-apex-pdp | [2024-01-14T18:50:48.625+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) policy-pap | [2024-01-14T18:50:26.688+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: kafka | [2024-01-14 18:49:59,003] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) grafana | logger=migrator t=2024-01-14T18:49:51.710300037Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" policy-apex-pdp | [2024-01-14T18:50:48.635+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-pap | allow.auto.create.topics = true kafka | [2024-01-14 18:49:59,027] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) grafana | logger=migrator t=2024-01-14T18:49:51.711454777Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.15396ms policy-db-migrator | -------------- policy-apex-pdp | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"757313d4-7af4-4931-9ac2-fd2d2926c923","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f3297215-5c2f-4719-8d17-679689175ab0","timestampMs":1705258248635,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | auto.commit.interval.ms = 5000 kafka | [2024-01-14 18:49:59,027] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) grafana | logger=migrator t=2024-01-14T18:49:51.716511563Z level=info msg="Executing migration" id="drop alert_notification_journal" policy-db-migrator | policy-apex-pdp | [2024-01-14T18:50:48.642+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | auto.include.jmx.reporter = true kafka | [2024-01-14 18:49:59,032] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) grafana | logger=migrator t=2024-01-14T18:49:51.71760383Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.092188ms policy-db-migrator | policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ad808b82-ffd2-475d-8e29-60ac57d7e4a3","timestampMs":1705258248625,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup"} policy-pap | auto.offset.reset = latest kafka | [2024-01-14 18:49:59,036] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) grafana | logger=migrator t=2024-01-14T18:49:51.722007763Z level=info msg="Executing migration" id="create alert_notification_state table v1" 
policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-apex-pdp | [2024-01-14T18:50:48.642+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-pap | bootstrap.servers = [kafka:9092] kafka | [2024-01-14 18:49:59,053] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) grafana | logger=migrator t=2024-01-14T18:49:51.722891564Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=884.321µs policy-db-migrator | -------------- policy-apex-pdp | [2024-01-14T18:50:48.649+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | check.crcs = true kafka | [2024-01-14 18:49:59,055] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) grafana | logger=migrator t=2024-01-14T18:49:51.72911234Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"757313d4-7af4-4931-9ac2-fd2d2926c923","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f3297215-5c2f-4719-8d17-679689175ab0","timestampMs":1705258248635,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 
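The PDP_UPDATE/PDP_STATUS exchange above carries plain JSON payloads on the `policy-pdp-pap` topic. As a sanity check, the PDP_STATUS response recorded in the log parses directly with the standard `json` module (payload copied verbatim from the log):

```python
import json

# PDP_STATUS response payload copied verbatim from the log above.
raw = ('{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY",'
       '"description":"Pdp status response message for PdpUpdate",'
       '"policies":[],"response":{"responseTo":"757313d4-7af4-4931-9ac2-fd2d2926c923",'
       '"responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},'
       '"messageName":"PDP_STATUS","requestId":"f3297215-5c2f-4719-8d17-679689175ab0",'
       '"timestampMs":1705258248635,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25",'
       '"pdpGroup":"defaultGroup","pdpSubgroup":"apex"}')

msg = json.loads(raw)
# The PAP side dispatches on messageName; the update outcome sits under "response".
print(msg["messageName"], msg["response"]["responseStatus"])
```

Note that the apex-pdp's own `MessageTypeDispatcher` discards PDP_STATUS events it reads back from the topic, as the "discarding event of type PDP_STATUS" lines show.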
policy-pap | client.dns.lookup = use_all_dns_ips kafka | [2024-01-14 18:49:59,057] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) grafana | logger=migrator t=2024-01-14T18:49:51.72997198Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=859.73µs policy-db-migrator | -------------- policy-apex-pdp | [2024-01-14T18:50:48.650+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-pap | client.id = consumer-policy-pap-4 kafka | [2024-01-14 18:49:59,059] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) grafana | logger=migrator t=2024-01-14T18:49:51.739281743Z level=info msg="Executing migration" id="Add for to alert table" policy-db-migrator | policy-apex-pdp | [2024-01-14T18:50:48.680+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | client.rack = kafka | [2024-01-14 18:49:59,074] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) grafana | logger=migrator t=2024-01-14T18:49:51.747629973Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=8.390212ms policy-db-migrator | policy-apex-pdp | {"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9e5e6847-a4b6-4311-8c93-d24747cde7bf","timestampMs":1705258248549,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-apex-pdp | [2024-01-14T18:50:48.682+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-apex-pdp | 
{"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9e5e6847-a4b6-4311-8c93-d24747cde7bf","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"5864a826-fdd3-49c3-980a-ed15355aced0","timestampMs":1705258248682,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | exclude.internal.topics = true kafka | [2024-01-14 18:49:59,100] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) grafana | logger=migrator t=2024-01-14T18:49:51.751813938Z level=info msg="Executing migration" id="Add column uid in alert_notification" policy-apex-pdp | [2024-01-14T18:50:48.691+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | fetch.max.bytes = 52428800 policy-db-migrator | -------------- kafka | [2024-01-14 18:49:59,141] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1705258199116,1705258199116,1,0,0,72057880846860289,258,0,27 grafana | logger=migrator t=2024-01-14T18:49:51.754737529Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.924661ms policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9e5e6847-a4b6-4311-8c93-d24747cde7bf","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"5864a826-fdd3-49c3-980a-ed15355aced0","timestampMs":1705258248682,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | fetch.max.wait.ms = 500 kafka | (kafka.zk.KafkaZkClient) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-apex-pdp | [2024-01-14T18:50:48.694+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS grafana | logger=migrator t=2024-01-14T18:49:51.75964117Z level=info msg="Executing migration" id="Update uid column values in alert_notification" policy-pap | fetch.min.bytes = 1 kafka | [2024-01-14 18:49:59,142] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) policy-db-migrator | -------------- policy-apex-pdp | [2024-01-14T18:50:48.720+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | group.id = policy-pap kafka | [2024-01-14 18:49:59,193] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.759910839Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=263.66µs policy-apex-pdp | 
{"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"020754a9-8ac4-4079-a8c5-daad36a9cd64","timestampMs":1705258248699,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | group.instance.id = null kafka | [2024-01-14 18:49:59,200] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.764512479Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" policy-apex-pdp | [2024-01-14T18:50:48.723+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-01-14 18:49:59,205] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | > upgrade 0700-toscapolicytype.sql grafana | logger=migrator t=2024-01-14T18:49:51.765838435Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.319106ms policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"020754a9-8ac4-4079-a8c5-daad36a9cd64","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d0d1911d-ca26-4d08-90e7-7571bfeb1817","timestampMs":1705258248723,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | interceptor.classes = [] kafka | [2024-01-14 18:49:59,206] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.780154422Z 
level=info msg="Executing migration" id="Remove unique index org_id_name" policy-apex-pdp | [2024-01-14T18:50:48.729+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | internal.leave.group.on.close = true kafka | [2024-01-14 18:49:59,216] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) grafana | logger=migrator t=2024-01-14T18:49:51.78126292Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.110418ms policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"020754a9-8ac4-4079-a8c5-daad36a9cd64","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d0d1911d-ca26-4d08-90e7-7571bfeb1817","timestampMs":1705258248723,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false kafka | [2024-01-14 18:49:59,218] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.784850185Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" policy-apex-pdp | [2024-01-14T18:50:48.730+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-pap | isolation.level = read_uncommitted kafka | [2024-01-14 18:49:59,227] INFO [Controller id=1] 1 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.790134398Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.282433ms policy-apex-pdp | [2024-01-14T18:50:56.159+00:00|INFO|RequestLog|qtp830863979-29] 172.17.0.5 - policyadmin [14/Jan/2024:18:50:56 +0000] "GET /metrics HTTP/1.1" 200 10648 "-" "Prometheus/2.48.1" policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer kafka | [2024-01-14 18:49:59,228] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.795820286Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" policy-apex-pdp | [2024-01-14T18:51:56.078+00:00|INFO|RequestLog|qtp830863979-28] 172.17.0.5 - policyadmin [14/Jan/2024:18:51:56 +0000] "GET /metrics HTTP/1.1" 200 10647 "-" "Prometheus/2.48.1" policy-pap | max.partition.fetch.bytes = 1048576 kafka | [2024-01-14 18:49:59,231] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0710-toscapolicytypes.sql grafana | logger=migrator t=2024-01-14T18:49:51.796015862Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=196.877µs policy-pap | max.poll.interval.ms = 300000 kafka | [2024-01-14 18:49:59,235] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.799790243Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" policy-pap | max.poll.records = 500 kafka | [2024-01-14 18:49:59,245] INFO [TransactionCoordinator id=1] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) grafana | logger=migrator t=2024-01-14T18:49:51.800943753Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.15276ms policy-pap | metadata.max.age.ms = 300000 kafka | [2024-01-14 18:49:59,250] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.805771371Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" policy-pap | metric.reporters = [] kafka | [2024-01-14 18:49:59,252] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.806642501Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=871.92µs policy-pap | metrics.num.samples = 2 kafka | [2024-01-14 18:49:59,267] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 
(kafka.server.metadata.ZkMetadataCache) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.811179989Z level=info msg="Executing migration" id="Drop old annotation table v4" policy-pap | metrics.recording.level = INFO kafka | [2024-01-14 18:49:59,267] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql grafana | logger=migrator t=2024-01-14T18:49:51.811329604Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=150.955µs policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-01-14 18:49:59,273] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.818390599Z level=info msg="Executing migration" id="create annotation table v5" policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | [2024-01-14 18:49:59,283] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-01-14T18:49:51.819412934Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.023465ms policy-pap | receive.buffer.bytes = 65536 kafka | [2024-01-14 18:49:59,284] INFO [Controller id=1] Deleting isr change notifications 
(kafka.controller.KafkaController) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.824910995Z level=info msg="Executing migration" id="add index annotation 0 v3" policy-pap | reconnect.backoff.max.ms = 1000 kafka | [2024-01-14 18:49:59,288] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.826157779Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.247153ms policy-pap | reconnect.backoff.ms = 50 kafka | [2024-01-14 18:49:59,304] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.833194553Z level=info msg="Executing migration" id="add index annotation 1 v3" policy-pap | request.timeout.ms = 30000 kafka | [2024-01-14 18:49:59,308] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0730-toscaproperty.sql grafana | logger=migrator t=2024-01-14T18:49:51.834414585Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.221092ms policy-pap | retry.backoff.ms = 100 kafka | [2024-01-14 18:49:59,312] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.839609776Z level=info msg="Executing migration" id="add index annotation 2 v3" policy-pap | sasl.client.callback.handler.class = null kafka | [2024-01-14 18:49:59,315] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, 
ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-01-14T18:49:51.840514557Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=905.122µs policy-pap | sasl.jaas.config = null kafka | [2024-01-14 18:49:59,323] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.844406202Z level=info msg="Executing migration" id="add index annotation 3 v3" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | kafka | [2024-01-14 18:49:59,327] INFO Awaiting socket connections on 0.0.0.0:29092. 
(kafka.network.DataPlaneAcceptor) grafana | logger=migrator t=2024-01-14T18:49:51.845425157Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.019145ms policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-01-14 18:49:59,327] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) policy-pap | sasl.kerberos.service.name = null policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.850947799Z level=info msg="Executing migration" id="add index annotation 4 v3" kafka | [2024-01-14 18:49:59,327] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql grafana | logger=migrator t=2024-01-14T18:49:51.852054918Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.107669ms kafka | [2024-01-14 18:49:59,327] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.860912475Z level=info msg="Executing migration" id="Update annotation table charset" kafka | [2024-01-14 18:49:59,328] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) grafana | logger=migrator t=2024-01-14T18:49:51.860969677Z level=info msg="Migration successfully executed" id="Update annotation table charset" 
duration=63.832µs kafka | [2024-01-14 18:49:59,328] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) policy-pap | sasl.login.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.870357863Z level=info msg="Executing migration" id="Add column region_id to annotation table" kafka | [2024-01-14 18:49:59,331] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.874681233Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.32989ms kafka | [2024-01-14 18:49:59,331] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.878364481Z level=info msg="Executing migration" id="Drop category_id index" kafka | [2024-01-14 18:49:59,332] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql grafana | logger=migrator t=2024-01-14T18:49:51.879333894Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=969.993µs kafka | [2024-01-14 18:49:59,332] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.883657354Z level=info msg="Executing migration" id="Add column tags to annotation table" kafka | [2024-01-14 18:49:59,332] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) grafana | logger=migrator t=2024-01-14T18:49:51.88783247Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.174915ms kafka | [2024-01-14 18:49:59,337] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.896244162Z level=info msg="Executing migration" id="Create annotation_tag table v2" kafka | [2024-01-14 18:49:59,346] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.896944806Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=704.055µs kafka | [2024-01-14 18:49:59,348] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser) policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.900288952Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" kafka | [2024-01-14 18:49:59,348] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser) policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql grafana | logger=migrator t=2024-01-14T18:49:51.901223694Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=931.012µs kafka | [2024-01-14 18:49:59,348] INFO 
Kafka startTimeMs: 1705258199341 (org.apache.kafka.common.utils.AppInfoParser) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.904203228Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" kafka | [2024-01-14 18:49:59,350] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-01-14T18:49:51.904986585Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=783.327µs kafka | [2024-01-14 18:49:59,356] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.917599633Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" kafka | [2024-01-14 18:49:59,357] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.934305463Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=16.700219ms kafka | [2024-01-14 
18:49:59,362] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.938702235Z level=info msg="Executing migration" id="Create annotation_tag table v3" kafka | [2024-01-14 18:49:59,363] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | > upgrade 0770-toscarequirement.sql grafana | logger=migrator t=2024-01-14T18:49:51.939385329Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=683.194µs kafka | [2024-01-14 18:49:59,363] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.943503692Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" kafka | [2024-01-14 18:49:59,364] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) grafana | logger=migrator t=2024-01-14T18:49:51.944360252Z 
level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=856.139µs kafka | [2024-01-14 18:49:59,367] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.949132207Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" kafka | [2024-01-14 18:49:59,368] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.949417057Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=282.34µs kafka | [2024-01-14 18:49:59,369] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.955904032Z level=info msg="Executing migration" id="drop table annotation_tag_v2" kafka | [2024-01-14 18:49:59,373] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) policy-pap | security.providers = null policy-db-migrator | > upgrade 0780-toscarequirements.sql grafana | logger=migrator t=2024-01-14T18:49:51.956589456Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=688.544µs kafka | [2024-01-14 18:49:59,373] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | -------------- grafana | 
logger=migrator t=2024-01-14T18:49:51.965465344Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" kafka | [2024-01-14 18:49:59,373] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) policy-pap | session.timeout.ms = 45000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) grafana | logger=migrator t=2024-01-14T18:49:51.965772355Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=309.041µs kafka | [2024-01-14 18:49:59,374] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.970439347Z level=info msg="Executing migration" id="Add created time to annotation table" kafka | [2024-01-14 18:49:59,376] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.974766547Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.32681ms kafka | [2024-01-14 18:49:59,387] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) policy-pap | ssl.cipher.suites = null policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.981641366Z level=info msg="Executing migration" id="Add updated time to annotation table" kafka | [2024-01-14 18:49:59,456] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 
sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql grafana | logger=migrator t=2024-01-14T18:49:51.985951235Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.302069ms kafka | [2024-01-14 18:49:59,511] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.992063648Z level=info msg="Executing migration" id="Add index for created in annotation table" kafka | [2024-01-14 18:49:59,540] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) policy-pap | ssl.engine.factory.class = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-01-14 18:50:04,388] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:51.992961899Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=897.481µs policy-pap | ssl.key.password = null kafka | [2024-01-14 
18:50:04,389] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.996336286Z level=info msg="Executing migration" id="Add index for updated in annotation table" policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-01-14 18:50:27,156] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:51.997230577Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=894.171µs policy-pap | ssl.keystore.certificate.chain = null kafka | [2024-01-14 18:50:27,158] DEBUG [Controller id=1] There is no producerId block yet (Zk 
path version 0), creating the first block (kafka.controller.KafkaController) policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql grafana | logger=migrator t=2024-01-14T18:49:52.004844811Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" policy-pap | ssl.keystore.key = null kafka | [2024-01-14 18:50:27,157] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.0051117Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=344.482µs policy-pap | ssl.keystore.location = null kafka | [2024-01-14 18:50:27,162] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) grafana | logger=migrator t=2024-01-14T18:49:52.012094264Z level=info msg="Executing 
migration" id="Add epoch_end column" policy-pap | ssl.keystore.password = null policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,195] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(997CZFJsQRqK8n_DtEBEGQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(kjWokfaySVa_GmTM7B9tfA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-14T18:49:52.016457796Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.357762ms policy-pap | ssl.keystore.type = JKS policy-db-migrator | kafka | [2024-01-14 18:50:27,196] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-14T18:49:52.020248358Z level=info msg="Executing migration" id="Add index for epoch_end" policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | kafka | [2024-01-14 18:50:27,198] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.021295504Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.051496ms policy-pap | ssl.provider = null policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql kafka | [2024-01-14 18:50:27,198] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) grafana | 
logger=migrator t=2024-01-14T18:49:52.026926151Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
policy-pap | ssl.secure.random.implementation = null
policy-db-migrator | --------------
kafka | [2024-01-14 18:50:27,198] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.027176439Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=252.438µs
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | [2024-01-14 18:50:27,198] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.03091329Z level=info msg="Executing migration" id="Move region to single row"
policy-pap | ssl.truststore.certificates = null
policy-db-migrator | --------------
kafka | [2024-01-14 18:50:27,198] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.031720938Z level=info msg="Migration successfully executed" id="Move region to single row" duration=810.579µs
policy-pap | ssl.truststore.location = null
policy-db-migrator |
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.03523402Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
policy-pap | ssl.truststore.password = null
policy-db-migrator |
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.036265236Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.033376ms
policy-pap | ssl.truststore.type = JKS
policy-db-migrator | > upgrade 0820-toscatrigger.sql
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.042145091Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | --------------
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.043157337Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.015756ms
policy-pap |
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.047904122Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
policy-pap | [2024-01-14T18:50:26.692+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-db-migrator | --------------
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.048930328Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.027586ms
policy-pap | [2024-01-14T18:50:26.692+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-pap | [2024-01-14T18:50:26.692+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705258226692
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.059618691Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-14T18:50:26.693+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.06447067Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=4.85123ms
policy-pap | [2024-01-14T18:50:26.693+00:00|INFO|ServiceManager|main] Policy PAP starting topics
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-pap | [2024-01-14T18:50:26.693+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=0937c149-2aa8-46a7-9e1a-1a04d55e7141, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.071734253Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
policy-pap | [2024-01-14T18:50:26.693+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9f04366a-9b2f-4312-96e1-33019febbf8b, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.072752239Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.021976ms
policy-pap | [2024-01-14T18:50:26.693+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a8bb7a6b-5e9b-4ca3-97a9-6ae245781b45, alive=false, publisher=null]]: starting
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
grafana | logger=migrator t=2024-01-14T18:49:52.07680399Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
policy-pap | [2024-01-14T18:50:26.708+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.077933969Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.132939ms
policy-pap | acks = -1
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.082573731Z level=info msg="Executing migration" id="Increase tags column to length 4096"
policy-pap | auto.include.jmx.reporter = true
policy-db-migrator |
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.082683205Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=111.534µs
policy-pap | batch.size = 16384
policy-db-migrator |
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.088759077Z level=info msg="Executing migration" id="create test_data table"
policy-pap | bootstrap.servers = [kafka:9092]
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.090140945Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.383038ms
policy-pap | buffer.memory = 33554432
policy-db-migrator | --------------
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.096050641Z level=info msg="Executing migration" id="create dashboard_version table v1"
policy-pap | client.dns.lookup = use_all_dns_ips
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.097781092Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.730361ms
policy-pap | client.id = producer-1
policy-db-migrator | --------------
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.111095786Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
policy-pap | compression.type = none
policy-db-migrator |
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.112212755Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.118039ms
policy-pap | connections.max.idle.ms = 540000
policy-db-migrator |
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.119507009Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
policy-pap | delivery.timeout.ms = 120000
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.121281561Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.774992ms
policy-pap | enable.idempotence = true
policy-db-migrator | --------------
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:52.127117685Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
policy-pap | interceptor.classes = []
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
grafana | logger=migrator t=2024-01-14T18:49:52.127469487Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=351.352µs
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.130666719Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
policy-pap | linger.ms = 0
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.131143035Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=475.907µs
policy-pap | max.block.ms = 60000
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.133845869Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
policy-pap | max.in.flight.requests.per.connection = 5
kafka | [2024-01-14 18:50:27,199] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
grafana | logger=migrator t=2024-01-14T18:49:52.133958523Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=112.244µs
policy-pap | max.request.size = 1048576
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.146908505Z level=info msg="Executing migration" id="create team table"
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-pap | metadata.max.idle.ms = 300000
grafana | logger=migrator t=2024-01-14T18:49:52.147678972Z level=info msg="Migration successfully executed" id="create team table" duration=769.917µs
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | metric.reporters = []
grafana | logger=migrator t=2024-01-14T18:49:52.158972076Z level=info msg="Executing migration" id="add index team.org_id"
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | metrics.num.samples = 2
grafana | logger=migrator t=2024-01-14T18:49:52.159978901Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.007105ms
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | metrics.recording.level = INFO
grafana | logger=migrator t=2024-01-14T18:49:52.163896117Z level=info msg="Executing migration" id="add unique index team_org_id_name"
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-pap | metrics.sample.window.ms = 30000
grafana | logger=migrator t=2024-01-14T18:49:52.164969915Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.073198ms
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | partitioner.adaptive.partitioning.enable = true
grafana | logger=migrator t=2024-01-14T18:49:52.169905397Z level=info msg="Executing migration" id="Add column uid in team"
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
policy-pap | partitioner.availability.timeout.ms = 0
grafana | logger=migrator t=2024-01-14T18:49:52.175334066Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.428489ms
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | partitioner.class = null
grafana | logger=migrator t=2024-01-14T18:49:52.179099608Z level=info msg="Executing migration" id="Update uid column values in team"
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | partitioner.ignore.keys = false
grafana | logger=migrator t=2024-01-14T18:49:52.179427519Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=325.192µs
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | receive.buffer.bytes = 32768
grafana | logger=migrator t=2024-01-14T18:49:52.183444629Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-pap | reconnect.backoff.max.ms = 1000
grafana | logger=migrator t=2024-01-14T18:49:52.184405063Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=960.853µs
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | reconnect.backoff.ms = 50
grafana | logger=migrator t=2024-01-14T18:49:52.189867463Z level=info msg="Executing migration" id="create team member table"
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-01-14T18:49:52.190666951Z level=info msg="Migration successfully executed" id="create team member table" duration=798.978µs
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | retries = 2147483647
grafana | logger=migrator t=2024-01-14T18:49:52.198722122Z level=info msg="Executing migration" id="add index team_member.org_id"
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-14T18:49:52.200214444Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.492732ms
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | sasl.client.callback.handler.class = null
grafana | logger=migrator t=2024-01-14T18:49:52.209440996Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
kafka | [2024-01-14 18:50:27,200] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-pap | sasl.jaas.config = null
grafana | logger=migrator t=2024-01-14T18:49:52.211007981Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.566574ms
kafka | [2024-01-14 18:50:27,205] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
grafana | logger=migrator t=2024-01-14T18:49:52.216376218Z level=info msg="Executing migration" id="add index team_member.team_id"
kafka | [2024-01-14 18:50:27,205] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
grafana | logger=migrator t=2024-01-14T18:49:52.217941472Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.564304ms
kafka | [2024-01-14 18:50:27,205] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.223653531Z level=info msg="Executing migration" id="Add column email to team table"
kafka | [2024-01-14 18:50:27,205] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.228351955Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.695854ms
kafka | [2024-01-14 18:50:27,205] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.232093656Z level=info msg="Executing migration" id="Add column external to team_member table"
kafka | [2024-01-14 18:50:27,205] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
grafana | logger=migrator t=2024-01-14T18:49:52.236930875Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.836879ms
kafka | [2024-01-14 18:50:27,205] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.callback.handler.class = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.242613713Z level=info msg="Executing migration" id="Add column permission to team_member table"
kafka | [2024-01-14 18:50:27,205] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.class = null
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
grafana | logger=migrator t=2024-01-14T18:49:52.247625778Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.012574ms
kafka | [2024-01-14 18:50:27,205] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.251172991Z level=info msg="Executing migration" id="create dashboard acl table"
kafka | [2024-01-14 18:50:27,205] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.251823904Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=651.173µs
kafka | [2024-01-14 18:50:27,205] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.257198622Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
grafana | logger=migrator t=2024-01-14T18:49:52.259337996Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.981189ms
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.265496211Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
grafana | logger=migrator t=2024-01-14T18:49:52.266971512Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.475171ms
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.271625605Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.login.retry.backoff.ms = 100
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.27263764Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.012275ms
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.mechanism = GSSAPI
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.288094439Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
grafana | logger=migrator t=2024-01-14T18:49:52.289989155Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.893386ms
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.299347121Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
grafana | logger=migrator t=2024-01-14T18:49:52.300638937Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.294105ms
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.309160324Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.309882199Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=722.095µs
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.314374496Z level=info msg="Executing migration" id="add index dashboard_permission"
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
grafana | logger=migrator t=2024-01-14T18:49:52.315618269Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.245083ms
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.327541205Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
grafana | logger=migrator t=2024-01-14T18:49:52.328416405Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=877.121µs
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.334069783Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.334550669Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=478.576µs
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | security.providers = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.33972745Z level=info msg="Executing migration" id="create tag table"
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | send.buffer.bytes = 131072
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
grafana | logger=migrator t=2024-01-14T18:49:52.340504937Z level=info msg="Migration successfully executed" id="create tag table" duration=777.607µs
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.356418112Z level=info msg="Executing migration" id="add index tag.key_value"
kafka | [2024-01-14 18:50:27,206] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
grafana | logger=migrator t=2024-01-14T18:49:52.358199354Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.781032ms
kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | ssl.cipher.suites = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:52.373095913Z level=info msg="Executing migration" id="create login attempt table"
kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.37414151Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.112849ms
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-14T18:49:52.378044836Z level=info msg="Executing migration" id="add index login_attempt.username"
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-01-14T18:49:52.379029961Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=985.005µs
kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-01-14T18:49:52.384554123Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username
- v1" kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-01-14T18:49:52.386024055Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.472121ms kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-01-14T18:49:52.393901119Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-01-14T18:49:52.417412649Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=23.50816ms kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-01-14T18:49:52.424094702Z level=info msg="Executing migration" id="create login_attempt v2" kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state 
of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-01-14T18:49:52.425801872Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.70799ms kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keystore.type = JKS grafana | logger=migrator t=2024-01-14T18:49:52.433896174Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 grafana | logger=migrator t=2024-01-14T18:49:52.434959221Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.061077ms policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.provider = null grafana | logger=migrator t=2024-01-14T18:49:52.441440277Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" policy-db-migrator | kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to 
NewReplica (state.change.logger) policy-pap | ssl.secure.random.implementation = null grafana | logger=migrator t=2024-01-14T18:49:52.441760268Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=319.981µs policy-db-migrator | policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-01-14T18:49:52.445903253Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql kafka | [2024-01-14 18:50:27,207] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-01-14T18:49:52.446994621Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.086248ms policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,208] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-01-14T18:49:52.454064528Z level=info msg="Executing migration" id="create user auth table" policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-14 18:50:27,208] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-01-14T18:49:52.454869046Z level=info msg="Migration successfully executed" id="create user auth table" duration=806.479µs policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,208] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-01-14T18:49:52.461792857Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" policy-db-migrator | kafka | [2024-01-14 18:50:27,208] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | transaction.timeout.ms = 60000 grafana | logger=migrator t=2024-01-14T18:49:52.462804612Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.013835ms policy-db-migrator | kafka | [2024-01-14 18:50:27,208] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) policy-pap | transactional.id = null grafana | logger=migrator t=2024-01-14T18:49:52.47075824Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql kafka | [2024-01-14 18:50:27,208] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | -------------- policy-pap | grafana | logger=migrator t=2024-01-14T18:49:52.470871984Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=117.824µs kafka | [2024-01-14 18:50:27,363] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-pap | [2024-01-14T18:50:26.717+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. grafana | logger=migrator t=2024-01-14T18:49:52.483708332Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" kafka | [2024-01-14 18:50:27,363] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | [2024-01-14T18:50:26.733+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 grafana | logger=migrator t=2024-01-14T18:49:52.490987875Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.276544ms kafka | [2024-01-14 18:50:27,363] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:26.733+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a grafana | logger=migrator t=2024-01-14T18:49:52.497308856Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" kafka | [2024-01-14 18:50:27,363] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | [2024-01-14T18:50:26.733+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705258226733 grafana | logger=migrator t=2024-01-14T18:49:52.505031285Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=7.72423ms kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | [2024-01-14T18:50:26.733+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a8bb7a6b-5e9b-4ca3-97a9-6ae245781b45, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created grafana | logger=migrator t=2024-01-14T18:49:52.509851883Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql policy-pap | [2024-01-14T18:50:26.733+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3c8f48ad-e010-4e3f-84a3-cd5b3c84f5b5, alive=false, publisher=null]]: starting kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition 
with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:26.734+00:00|INFO|ProducerConfig|main] ProducerConfig values: grafana | logger=migrator t=2024-01-14T18:49:52.515430608Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.579125ms policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-14T18:49:52.525149807Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | acks = -1 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.531737467Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=6.58399ms kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-01-14T18:49:52.536807723Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" policy-pap | batch.size = 16384 policy-db-migrator | kafka | [2024-01-14 18:50:27,364] 
INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.537750766Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=943.093µs policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.54272329Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" policy-pap | buffer.memory = 33554432 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.54903804Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.31666ms policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.554798061Z level=info msg="Executing migration" id="create server_lock table" policy-pap | client.id = producer-2 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | compression.type = none policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.555359421Z level=info msg="Migration successfully executed" id="create server_lock table" duration=561.489µs policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.563593888Z level=info msg="Executing migration" id="add index server_lock.operation_uid" policy-pap | delivery.timeout.ms = 120000 policy-db-migrator | kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.56452863Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=934.932µs policy-pap | enable.idempotence = true policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.571041428Z level=info msg="Executing migration" id="create user auth token table" policy-pap | interceptor.classes = [] kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.57226259Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.219723ms policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER 
TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-14T18:49:52.577223853Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" policy-pap | linger.ms = 0 kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.578819039Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.595546ms policy-pap | max.block.ms = 60000 kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.583676748Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" policy-pap | max.in.flight.requests.per.connection = 5 kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.584636572Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=958.063µs policy-pap | max.request.size = 1048576 kafka | [2024-01-14 18:50:27,364] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-pap | metadata.max.age.ms = 300000 kafka | [2024-01-14 18:50:27,365] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.588140484Z level=info msg="Executing migration" id="add index user_auth_token.user_id" policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,365] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.589150679Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.010105ms policy-pap | metadata.max.idle.ms = 300000 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT 
FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.594204995Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-01-14T18:49:52.599886594Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.680739ms policy-db-migrator | policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-01-14T18:49:52.605478608Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" policy-db-migrator | kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-01-14T18:49:52.606529765Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.049217ms 
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-01-14T18:49:52.616085188Z level=info msg="Executing migration" id="create cache_data table" policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-01-14T18:49:52.616945919Z level=info msg="Migration successfully executed" id="create cache_data table" duration=861.52µs policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | partitioner.availability.timeout.ms = 0 grafana | logger=migrator t=2024-01-14T18:49:52.621817119Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" policy-db-migrator | -------------- kafka | [2024-01-14 
18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | partitioner.class = null grafana | logger=migrator t=2024-01-14T18:49:52.622821634Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.002335ms policy-db-migrator | kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | partitioner.ignore.keys = false grafana | logger=migrator t=2024-01-14T18:49:52.62760312Z level=info msg="Executing migration" id="create short_url table v1" policy-db-migrator | kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | receive.buffer.bytes = 32768 grafana | logger=migrator t=2024-01-14T18:49:52.628363687Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=763.907µs policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap 
| reconnect.backoff.max.ms = 1000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.633936711Z level=info msg="Executing migration" id="add index short_url.org_id-uid" kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | reconnect.backoff.ms = 50 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-14T18:49:52.634927736Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=990.635µs kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | request.timeout.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.639457514Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | retries = 2147483647 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.639523586Z level=info msg="Migration 
successfully executed" id="alter table short_url alter column created_by type to bigint" duration=66.693µs kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | retry.backoff.ms = 100 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.647997002Z level=info msg="Executing migration" id="delete alert_definition table" kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql grafana | logger=migrator t=2024-01-14T18:49:52.648135556Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=139.185µs kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.jaas.config = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.65484934Z level=info msg="Executing migration" id="recreate alert_definition table" policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT grafana | logger=migrator t=2024-01-14T18:49:52.656078913Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.233023ms policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.668076462Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" policy-pap | sasl.kerberos.service.name = null kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.669031725Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=954.903µs policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-01-14 18:50:27,366] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.675583594Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0100-pdp.sql grafana | logger=migrator t=2024-01-14T18:49:52.676756665Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.176171ms policy-pap | sasl.login.callback.handler.class = null kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.680197914Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" policy-pap | sasl.login.class = null kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger) policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY grafana | logger=migrator t=2024-01-14T18:49:52.680266597Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=69.293µs policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-01-14 18:50:27,366] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.689518429Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-01-14 18:50:27,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.691128686Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.610677ms policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-01-14 18:50:27,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.699652903Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-01-14 18:50:27,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) policy-db-migrator | > upgrade 0110-idx_tsidx1.sql grafana | logger=migrator t=2024-01-14T18:49:52.701447716Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.793532ms policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-01-14 18:50:27,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.70643068Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-01-14 18:50:27,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) grafana | logger=migrator t=2024-01-14T18:49:52.707988494Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.554094ms policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-01-14 18:50:27,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.717813567Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-01-14 18:50:27,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.719577628Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.766832ms policy-pap | sasl.mechanism = GSSAPI kafka | [2024-01-14 18:50:27,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.723483474Z level=info msg="Executing migration" id="Add column paused in alert_definition" policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-01-14 18:50:27,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql grafana | logger=migrator t=2024-01-14T18:49:52.730383345Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.901241ms policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-01-14 18:50:27,369] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.736058893Z level=info msg="Executing migration" id="drop alert_definition table" policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-01-14 18:50:27,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY grafana | logger=migrator t=2024-01-14T18:49:52.737582736Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.523443ms policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-01-14 18:50:27,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.743854415Z level=info msg="Executing migration" id="delete alert_definition_version table" policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-01-14 18:50:27,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.74399933Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=146.185µs policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | kafka | [2024-01-14 18:50:27,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.747969268Z level=info msg="Executing migration" id="recreate alert_definition_version table" policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | > upgrade 0130-pdpstatistics.sql kafka | [2024-01-14 18:50:27,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:52.748757286Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=788.018µs policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-01-14 18:50:27,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub grafana | logger=migrator t=2024-01-14T18:49:52.756551177Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" kafka | [2024-01-14 18:50:27,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-pap | sasl.oauthbearer.token.endpoint.url = null grafana | logger=migrator t=2024-01-14T18:49:52.758959221Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=2.406864ms kafka | [2024-01-14 18:50:27,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | -------------- policy-pap | security.protocol = PLAINTEXT grafana | logger=migrator t=2024-01-14T18:49:52.764011678Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" kafka | 
[2024-01-14 18:50:27,370] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) policy-db-migrator | policy-pap | security.providers = null grafana | logger=migrator t=2024-01-14T18:49:52.765735338Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.722661ms kafka | [2024-01-14 18:50:27,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | policy-pap | send.buffer.bytes = 131072 grafana | logger=migrator t=2024-01-14T18:49:52.769321973Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" kafka | [2024-01-14 18:50:27,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-pap | socket.connection.setup.timeout.max.ms = 30000 grafana | logger=migrator t=2024-01-14T18:49:52.769387955Z level=info msg="Migration successfully executed" id="alter 
alert_definition_version table data column to mediumtext in mysql" duration=66.692µs kafka | [2024-01-14 18:50:27,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) policy-db-migrator | -------------- policy-pap | socket.connection.setup.timeout.ms = 10000 grafana | logger=migrator t=2024-01-14T18:49:52.779649673Z level=info msg="Executing migration" id="drop alert_definition_version table" kafka | [2024-01-14 18:50:27,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-pap | ssl.cipher.suites = null grafana | logger=migrator t=2024-01-14T18:49:52.781210908Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.559865ms kafka | [2024-01-14 18:50:27,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 
1 for partition __consumer_offsets-4 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] grafana | logger=migrator t=2024-01-14T18:49:52.78815744Z level=info msg="Executing migration" id="create alert_instance table" kafka | [2024-01-14 18:50:27,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) policy-db-migrator | policy-pap | ssl.endpoint.identification.algorithm = https grafana | logger=migrator t=2024-01-14T18:49:52.789506987Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.345077ms kafka | [2024-01-14 18:50:27,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-01-14T18:49:52.79330522Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" kafka | [2024-01-14 18:50:27,371] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-48 (state.change.logger) policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-01-14T18:49:52.794842963Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.537293ms kafka | [2024-01-14 18:50:27,372] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) policy-db-migrator | -------------- policy-pap | ssl.keymanager.algorithm = SunX509 grafana | logger=migrator t=2024-01-14T18:49:52.804022043Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" kafka | [2024-01-14 18:50:27,372] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) policy-db-migrator | policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-01-14T18:49:52.805540566Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.518173ms kafka | [2024-01-14 18:50:27,372] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) policy-db-migrator | policy-pap | ssl.keystore.key = null grafana | logger=migrator t=2024-01-14T18:49:52.809119201Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" kafka | [2024-01-14 18:50:27,372] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-pap | ssl.keystore.location = null grafana | logger=migrator t=2024-01-14T18:49:52.815219534Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.100993ms policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,372] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) policy-pap | ssl.keystore.password = null grafana | logger=migrator t=2024-01-14T18:49:52.820304081Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL kafka | [2024-01-14 18:50:27,372] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) policy-pap | ssl.keystore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.821275455Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=971.444µs kafka | [2024-01-14 18:50:27,372] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.828267179Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) policy-pap | ssl.provider = null policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.829780862Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.511112ms kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-pap | ssl.secure.random.implementation = null policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-01-14T18:49:52.833435929Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.874344476Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=40.906857ms kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-pap | ssl.truststore.certificates = null policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME grafana | logger=migrator t=2024-01-14T18:49:52.886931815Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" kafka | 
[2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) policy-pap | ssl.truststore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.927049804Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=40.114219ms kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) policy-pap | ssl.truststore.password = null policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.931158217Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) policy-pap | ssl.truststore.type = JKS policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.931878362Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=720.355µs kafka | 
[2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) policy-pap | transaction.timeout.ms = 60000 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-01-14T18:49:52.935001952Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) policy-pap | transactional.id = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.935850391Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=847.14µs kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-db-migrator | UPDATE jpapdpstatistics_enginestats a grafana | logger=migrator t=2024-01-14T18:49:52.946108919Z level=info 
msg="Executing migration" id="add current_reason column related to current_state" kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) policy-pap | policy-db-migrator | JOIN pdpstatistics b grafana | logger=migrator t=2024-01-14T18:49:52.953265748Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=7.155019ms kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) policy-pap | [2024-01-14T18:50:26.734+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp grafana | logger=migrator t=2024-01-14T18:49:52.958151719Z level=info msg="Executing migration" id="create alert_rule table" kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) policy-pap | [2024-01-14T18:50:26.737+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-db-migrator | SET a.id = b.id grafana | logger=migrator t=2024-01-14T18:49:52.958932566Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=780.587µs kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) policy-pap | [2024-01-14T18:50:26.737+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.962954206Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
policy-pdp-pap-0 (state.change.logger) policy-pap | [2024-01-14T18:50:26.737+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705258226737 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.963937591Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=983.005µs kafka | [2024-01-14 18:50:27,373] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) policy-pap | [2024-01-14T18:50:26.737+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3c8f48ad-e010-4e3f-84a3-cd5b3c84f5b5, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.970201319Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" kafka | [2024-01-14 18:50:27,381] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) policy-pap | [2024-01-14T18:50:26.737+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-01-14T18:49:52.971141992Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=940.213µs kafka | [2024-01-14 18:50:27,385] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) policy-pap | [2024-01-14T18:50:26.737+00:00|INFO|ServiceManager|main] Policy PAP 
starting PDP publisher policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:26.739+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp grafana | logger=migrator t=2024-01-14T18:49:52.974117916Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" policy-pap | [2024-01-14T18:50:26.740+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.975127861Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.009205ms policy-pap | [2024-01-14T18:50:26.742+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.978426396Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" policy-pap | [2024-01-14T18:50:26.744+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:52.978493968Z 
level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=65.222µs policy-pap | [2024-01-14T18:50:26.745+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql grafana | logger=migrator t=2024-01-14T18:49:52.983943288Z level=info msg="Executing migration" id="add column for to alert_rule" policy-pap | [2024-01-14T18:50:26.745+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:52.990067042Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.123784ms policy-pap | [2024-01-14T18:50:26.745+00:00|INFO|TimerManager|Thread-9] timer manager update started kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) grafana | logger=migrator t=2024-01-14T18:49:52.997095107Z level=info msg="Executing migration" id="add column annotations to alert_rule" policy-pap | [2024-01-14T18:50:26.746+00:00|INFO|ServiceManager|main] Policy PAP started kafka | [2024-01-14 18:50:27,387] TRACE [Controller 
id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.005968136Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=8.835057ms policy-pap | [2024-01-14T18:50:26.749+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.448 seconds (process running for 11.105) kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.00926509Z level=info msg="Executing migration" id="add column labels to alert_rule" policy-pap | [2024-01-14T18:50:26.749+00:00|INFO|TimerManager|Thread-10] timer manager state-change started kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.015434793Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.169633ms policy-pap | [2024-01-14T18:50:27.142+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: 0Gs_niWkQtyT_H8dS3neSw kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql grafana | logger=migrator t=2024-01-14T18:49:53.018399106Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" policy-pap | [2024-01-14T18:50:27.144+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.01937668Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=977.934µs policy-pap | [2024-01-14T18:50:27.144+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: 0Gs_niWkQtyT_H8dS3neSw kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) grafana | logger=migrator t=2024-01-14T18:49:53.02749989Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" policy-pap | [2024-01-14T18:50:27.145+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Cluster ID: 0Gs_niWkQtyT_H8dS3neSw kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.028537726Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.037736ms policy-pap | [2024-01-14T18:50:27.170+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] 
[Producer clientId=producer-1] ProducerId set to 1 with epoch 0 kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.036652286Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" policy-pap | [2024-01-14T18:50:27.170+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.046724823Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=10.073457ms policy-pap | [2024-01-14T18:50:27.184+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | > upgrade 0210-sequence.sql grafana | logger=migrator t=2024-01-14T18:49:53.055961662Z level=info msg="Executing migration" id="add panel_id column to alert_rule" policy-pap | [2024-01-14T18:50:27.184+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: 0Gs_niWkQtyT_H8dS3neSw kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | 
logger=migrator t=2024-01-14T18:49:53.066436003Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=10.476822ms policy-pap | [2024-01-14T18:50:27.251+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) grafana | logger=migrator t=2024-01-14T18:49:53.070165771Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" policy-pap | [2024-01-14T18:50:27.320+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.070887756Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=721.645µs policy-pap | [2024-01-14T18:50:27.365+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.077324818Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-14T18:50:27.436+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.085107407Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.783319ms kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-14T18:50:27.478+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0220-sequence.sql grafana | logger=migrator t=2024-01-14T18:49:53.088286096Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" kafka | [2024-01-14 18:50:27,387] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-14T18:50:27.544+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.094301904Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.015398ms kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-14T18:50:27.583+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) grafana | logger=migrator t=2024-01-14T18:49:53.099679099Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-14T18:50:27.653+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.099748612Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=70.792µs kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-14T18:50:27.690+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, 
groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.10493092Z level=info msg="Executing migration" id="create alert_rule_version table" kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-14T18:50:27.757+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.105985626Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.052426ms kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-14T18:50:27.797+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql grafana | logger=migrator t=2024-01-14T18:49:53.109484687Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) policy-pap | [2024-01-14T18:50:27.862+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while 
fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.111198996Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.713739ms
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:27.902+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion)
grafana | logger=migrator t=2024-01-14T18:49:53.12608123Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:27.971+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.128159041Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=2.065831ms
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:28.012+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.13333558Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:28.081+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.133519466Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=185.906µs
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:28.123+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql
grafana | logger=migrator t=2024-01-14T18:49:53.137613317Z level=info msg="Executing migration" id="add column for to alert_rule_version"
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:28.128+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] (Re-)joining group
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.143841642Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.227955ms
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:28.152+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Request joining group due to: need to re-join with the given member-id: consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3-7bf3d7b2-ab7f-4b21-a745-eed28010975b
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion)
grafana | logger=migrator t=2024-01-14T18:49:53.150211582Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:28.152+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.15741916Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=7.207798ms
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:28.152+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] (Re-)joining group
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.161582714Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:28.190+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.169032401Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.449217ms
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:28.192+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-db-migrator | > upgrade 0120-toscatrigger.sql
grafana | logger=migrator t=2024-01-14T18:49:53.175901118Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:28.197+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-83ddcd15-11f5-4fb2-8b47-2df21c19e82b
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.182425563Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.525854ms
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:28.197+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator | DROP TABLE IF EXISTS toscatrigger
grafana | logger=migrator t=2024-01-14T18:49:53.186799914Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
policy-pap | [2024-01-14T18:50:28.197+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.193814405Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.010962ms
policy-pap | [2024-01-14T18:50:31.186+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Successfully joined group with generation Generation{generationId=1, memberId='consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3-7bf3d7b2-ab7f-4b21-a745-eed28010975b', protocol='range'}
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.202569768Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
policy-pap | [2024-01-14T18:50:31.194+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Finished assignment for group at generation 1: {consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3-7bf3d7b2-ab7f-4b21-a745-eed28010975b=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.202723473Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=158.846µs
policy-pap | [2024-01-14T18:50:31.203+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-83ddcd15-11f5-4fb2-8b47-2df21c19e82b', protocol='range'}
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql
grafana | logger=migrator t=2024-01-14T18:49:53.214753228Z level=info msg="Executing migration" id=create_alert_configuration_table
policy-pap | [2024-01-14T18:50:31.204+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-83ddcd15-11f5-4fb2-8b47-2df21c19e82b=Assignment(partitions=[policy-pdp-pap-0])}
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.216076243Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.326256ms
policy-pap | [2024-01-14T18:50:31.221+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Successfully synced group in generation Generation{generationId=1, memberId='consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3-7bf3d7b2-ab7f-4b21-a745-eed28010975b', protocol='range'}
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB
grafana | logger=migrator t=2024-01-14T18:49:53.223610903Z level=info msg="Executing migration" id="Add column default in alert_configuration"
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:31.221+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-83ddcd15-11f5-4fb2-8b47-2df21c19e82b', protocol='range'}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.230798731Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.186428ms
kafka | [2024-01-14 18:50:27,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
policy-pap | [2024-01-14T18:50:31.221+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.234707656Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
kafka | [2024-01-14 18:50:27,388] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-pap | [2024-01-14T18:50:31.221+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.234815089Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=108.993µs
kafka | [2024-01-14 18:50:27,397] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
policy-pap | [2024-01-14T18:50:31.226+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | > upgrade 0140-toscaparameter.sql
grafana | logger=migrator t=2024-01-14T18:49:53.241330644Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
kafka | [2024-01-14 18:50:27,398] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:31.226+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.250138818Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=8.807924ms
kafka | [2024-01-14 18:50:27,398] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:31.248+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | DROP TABLE IF EXISTS toscaparameter
grafana | logger=migrator t=2024-01-14T18:49:53.254183158Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
kafka | [2024-01-14 18:50:27,398] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:31.249+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.255669929Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.487972ms
kafka | [2024-01-14 18:50:27,398] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:31.270+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.259829522Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
kafka | [2024-01-14 18:50:27,398] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:31.270+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3, groupId=9f04366a-9b2f-4312-96e1-33019febbf8b] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
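The grafana entries interleaved above are logfmt-style migrator lines that record each migration's `id` and `duration`. As a side note for anyone mining this log for timing data, here is a minimal stdlib-only sketch (the helper name `migration_duration_ms` is hypothetical, not part of any Grafana or ONAP tooling) that extracts those two fields; it only handles the quoted `id="..."` form seen in these lines:

```python
import re

# Quoted migration id, e.g. id="add column org_id in alert_configuration"
ID_RE = re.compile(r'id="([^"]*)"')
# Duration value plus unit, e.g. duration=2.065831ms or duration=158.846µs
DUR_RE = re.compile(r'duration=([0-9.]+)(µs|ms|s)')

UNIT_TO_MS = {"µs": 0.001, "ms": 1.0, "s": 1000.0}

def migration_duration_ms(line):
    """Return (migration id, duration in ms), or None if either field is absent."""
    mid = ID_RE.search(line)
    dur = DUR_RE.search(line)
    if not mid or not dur:
        return None
    value, unit = dur.groups()
    return mid.group(1), float(value) * UNIT_TO_MS[unit]

line = ('logger=migrator t=2024-01-14T18:49:53.128159041Z level=info '
        'msg="Migration successfully executed" id="add index" duration=2.065831ms')
print(migration_duration_ms(line))  # ('add index', 2.065831)
```

Summing the extracted values over the whole log gives a rough per-migration timing profile; "Executing migration" lines carry no `duration` field and return None.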
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.266547164Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.717792ms
kafka | [2024-01-14 18:50:27,398] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:33.679+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-db-migrator | > upgrade 0150-toscaproperty.sql
grafana | logger=migrator t=2024-01-14T18:49:53.279840382Z level=info msg="Executing migration" id=create_ngalert_configuration_table
kafka | [2024-01-14 18:50:27,398] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:33.680+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.281127397Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.285434ms
kafka | [2024-01-14 18:50:27,398] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:33.682+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints
grafana | logger=migrator t=2024-01-14T18:49:53.284951138Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
kafka | [2024-01-14 18:50:27,398] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:48.447+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.286116119Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.166781ms
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | []
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.290040144Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:48.448+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.29747562Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.433246ms
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c64813f5-5d15-4f56-acff-79be224de4a1","timestampMs":1705258248410,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup"}
policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata
grafana | logger=migrator t=2024-01-14T18:49:53.305226177Z level=info msg="Executing migration" id="create provenance_type table"
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:48.448+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.306561294Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.334856ms
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"c64813f5-5d15-4f56-acff-79be224de4a1","timestampMs":1705258248410,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup"}
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.312036933Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:48.458+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.313827634Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.790582ms
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-14T18:50:48.566+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate starting
policy-db-migrator | DROP TABLE IF EXISTS toscaproperty
grafana | logger=migrator t=2024-01-14T18:49:53.318173154Z level=info msg="Executing migration" id="create alert_image table"
policy-pap | [2024-01-14T18:50:48.566+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate starting listener
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.318960551Z level=info msg="Migration successfully executed" id="create alert_image table" duration=787.687µs
policy-pap | [2024-01-14T18:50:48.567+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate starting timer
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.323630112Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
policy-pap | [2024-01-14T18:50:48.567+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=757313d4-7af4-4931-9ac2-fd2d2926c923, expireMs=1705258278567]
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.324959458Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.325776ms
policy-pap | [2024-01-14T18:50:48.569+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=757313d4-7af4-4931-9ac2-fd2d2926c923, expireMs=1705258278567]
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql
grafana | logger=migrator t=2024-01-14T18:49:53.335101218Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
policy-pap | [2024-01-14T18:50:48.569+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate starting enqueue
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.335398048Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=300.23µs
policy-pap | [2024-01-14T18:50:48.570+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate started
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY
grafana | logger=migrator t=2024-01-14T18:49:53.341375094Z level=info msg="Executing migration" id=create_alert_configuration_history_table
policy-pap | [2024-01-14T18:50:48.571+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-14T18:49:53.342525144Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.14823ms
policy-pap | {"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"757313d4-7af4-4931-9ac2-fd2d2926c923","timestampMs":1705258248548,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
grafana | logger=migrator t=2024-01-14T18:49:53.352873011Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
policy-pap | [2024-01-14T18:50:48.615+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | {"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"757313d4-7af4-4931-9ac2-fd2d2926c923","timestampMs":1705258248548,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-14T18:49:53.355250983Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=2.376552ms
kafka | [2024-01-14 18:50:27,399] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)
policy-pap | [2024-01-14T18:50:48.615+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-01-14T18:49:53.360036338Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | {"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"757313d4-7af4-4931-9ac2-fd2d2926c923","timestampMs":1705258248548,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-14T18:49:53.360532225Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
policy-pap | [2024-01-14T18:50:48.616+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
grafana | logger=migrator t=2024-01-14T18:49:53.365525537Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | 
policy-pap | [2024-01-14T18:50:48.617+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
grafana | logger=migrator t=2024-01-14T18:49:53.366013334Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=487.667µs
kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
policy-pap | [2024-01-14T18:50:48.633+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-14T18:49:53.371628778Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp
Heartbeat","messageName":"PDP_STATUS","requestId":"ad808b82-ffd2-475d-8e29-60ac57d7e4a3","timestampMs":1705258248625,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-01-14T18:49:53.373616146Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.985978ms policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-pap | [2024-01-14T18:50:48.634+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus grafana | logger=migrator t=2024-01-14T18:49:53.378862787Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:48.638+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-01-14T18:49:53.389190953Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=10.328156ms kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp 
Heartbeat","messageName":"PDP_STATUS","requestId":"ad808b82-ffd2-475d-8e29-60ac57d7e4a3","timestampMs":1705258248625,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup"} grafana | logger=migrator t=2024-01-14T18:49:53.401294301Z level=info msg="Executing migration" id="create library_element table v1" policy-db-migrator | kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.643+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-01-14T18:49:53.402177021Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=883.03µs policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.410443276Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"757313d4-7af4-4931-9ac2-fd2d2926c923","responseStatus":"SUCCESS","responseMessage":"Pdp update 
successful."},"messageName":"PDP_STATUS","requestId":"f3297215-5c2f-4719-8d17-679689175ab0","timestampMs":1705258248635,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.41228088Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.840464ms policy-pap | [2024-01-14T18:50:48.664+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate stopping policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.416035239Z level=info msg="Executing migration" id="create library_element_connection table v1" policy-pap | [2024-01-14T18:50:48.664+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate stopping enqueue policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.416758434Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=723.185µs kafka | [2024-01-14 18:50:27,400] 
TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.664+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate stopping timer policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.422874905Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.664+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=757313d4-7af4-4931-9ac2-fd2d2926c923, expireMs=1705258278567] policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql grafana | logger=migrator t=2024-01-14T18:49:53.424729219Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.853714ms kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | 
[2024-01-14T18:50:48.664+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate stopping listener policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.42880989Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.664+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate stopped policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT grafana | logger=migrator t=2024-01-14T18:49:53.430539009Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.728359ms kafka | [2024-01-14 18:50:27,400] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.668+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.435512611Z level=info msg="Executing migration" id="increase max description length to 2048" kafka | [2024-01-14 18:50:27,401] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"757313d4-7af4-4931-9ac2-fd2d2926c923","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f3297215-5c2f-4719-8d17-679689175ab0","timestampMs":1705258248635,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.435553302Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=41.971µs kafka | [2024-01-14 18:50:27,401] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.669+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 757313d4-7af4-4931-9ac2-fd2d2926c923 policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.446621294Z level=info msg="Executing migration" id="alter library_element model to mediumtext" kafka | [2024-01-14 18:50:27,401] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 
policy-pap | [2024-01-14T18:50:48.669+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate successful policy-db-migrator | > upgrade 0100-upgrade.sql grafana | logger=migrator t=2024-01-14T18:49:53.446827351Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=206.357µs kafka | [2024-01-14 18:50:27,401] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.670+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 start publishing next request policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.452639882Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" kafka | [2024-01-14 18:50:27,401] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.670+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpStateChange starting policy-db-migrator | select 'upgrade to 1100 completed' as msg grafana | logger=migrator t=2024-01-14T18:49:53.453093857Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=454.006µs kafka | [2024-01-14 18:50:27,401] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) kafka | [2024-01-14 18:50:27,401] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.670+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpStateChange starting listener policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.458145401Z level=info msg="Executing migration" id="create data_keys table" kafka | [2024-01-14 18:50:27,401] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.670+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpStateChange starting timer policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.459614802Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.470061ms kafka | [2024-01-14 18:50:27,401] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.670+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=9e5e6847-a4b6-4311-8c93-d24747cde7bf, expireMs=1705258278670] policy-db-migrator | msg grafana | logger=migrator t=2024-01-14T18:49:53.466957525Z level=info msg="Executing migration" id="create secrets table" kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) policy-pap | [2024-01-14T18:50:48.670+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpStateChange starting enqueue policy-db-migrator | upgrade to 1100 completed grafana | logger=migrator t=2024-01-14T18:49:53.467863316Z level=info msg="Migration successfully executed" id="create secrets table" duration=906.511µs kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) policy-pap | [2024-01-14T18:50:48.670+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpStateChange started policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.47233118Z level=info msg="Executing migration" id="rename data_keys name column to id" kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) policy-pap | [2024-01-14T18:50:48.670+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=9e5e6847-a4b6-4311-8c93-d24747cde7bf, 
expireMs=1705258278670] policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.521218457Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=48.884366ms policy-pap | [2024-01-14T18:50:48.670+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.533346675Z level=info msg="Executing migration" id="add name column into data_keys" policy-pap | {"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9e5e6847-a4b6-4311-8c93-d24747cde7bf","timestampMs":1705258248549,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.538699519Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.355784ms policy-pap | [2024-01-14T18:50:48.681+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 
1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.548681094Z level=info msg="Executing migration" id="copy data_keys id column values into name" policy-pap | {"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9e5e6847-a4b6-4311-8c93-d24747cde7bf","timestampMs":1705258248549,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.548832109Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=151.756µs policy-pap | [2024-01-14T18:50:48.682+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE policy-db-migrator | kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.554275216Z level=info msg="Executing migration" id="rename data_keys name column to label" policy-pap | [2024-01-14T18:50:48.691+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | > upgrade 0110-idx_tsidx1.sql kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.603547346Z level=info msg="Migration successfully executed" id="rename data_keys name 
column to label" duration=49.265769ms policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9e5e6847-a4b6-4311-8c93-d24747cde7bf","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"5864a826-fdd3-49c3-980a-ed15355aced0","timestampMs":1705258248682,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.610150983Z level=info msg="Executing migration" id="rename data_keys id column back to name" policy-pap | [2024-01-14T18:50:48.691+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9e5e6847-a4b6-4311-8c93-d24747cde7bf policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics grafana | logger=migrator t=2024-01-14T18:49:53.661520015Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=51.364321ms kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) policy-pap | [2024-01-14T18:50:48.708+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.671357314Z level=info msg="Executing migration" id="create kv_store table v1" kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-39 (state.change.logger) policy-pap | {"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"9e5e6847-a4b6-4311-8c93-d24747cde7bf","timestampMs":1705258248549,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.672650629Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.293545ms kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) policy-pap | [2024-01-14T18:50:48.708+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.679974311Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" policy-pap | [2024-01-14T18:50:48.711+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) grafana | logger=migrator t=2024-01-14T18:49:53.681758983Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.783702ms kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"9e5e6847-a4b6-4311-8c93-d24747cde7bf","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"5864a826-fdd3-49c3-980a-ed15355aced0","timestampMs":1705258248682,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.688365271Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpStateChange stopping policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.688575688Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=211.348µs kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpStateChange stopping enqueue policy-db-migrator | grafana | logger=migrator t=2024-01-14T18:49:53.695363142Z level=info msg="Executing migration" id="create permission table" kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpStateChange stopping timer policy-db-migrator | > upgrade 0120-audit_sequence.sql 
grafana | logger=migrator t=2024-01-14T18:49:53.69616812Z level=info msg="Migration successfully executed" id="create permission table" duration=805.588µs kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=9e5e6847-a4b6-4311-8c93-d24747cde7bf, expireMs=1705258278670] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.701413451Z level=info msg="Executing migration" id="add unique index permission.role_id" kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpStateChange stopping listener policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) grafana | logger=migrator t=2024-01-14T18:49:53.705551783Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=4.139473ms kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpStateChange stopped policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-14T18:49:53.712285906Z level=info msg="Executing migration" id="add unique 
index role_id_action_scope" kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpStateChange successful policy-db-migrator | kafka | [2024-01-14 18:50:27,440] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.714135699Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.849364ms policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 start publishing next request policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.719449863Z level=info msg="Executing migration" id="create role table" policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate starting policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.720759088Z level=info msg="Migration successfully executed" id="create role 
table" duration=1.309366ms policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate starting listener policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.724265829Z level=info msg="Executing migration" id="add column display_name" policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate starting timer policy-db-migrator | kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.732869866Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.602446ms policy-db-migrator | kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=020754a9-8ac4-4079-a8c5-daad36a9cd64, expireMs=1705258278712] grafana | logger=migrator t=2024-01-14T18:49:53.738995137Z level=info msg="Executing migration" id="add column group_name" policy-db-migrator | > upgrade 0130-statistics_sequence.sql kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-pap | 
[2024-01-14T18:50:48.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate starting enqueue grafana | logger=migrator t=2024-01-14T18:49:53.747939035Z level=info msg="Migration successfully executed" id="add column group_name" duration=8.943528ms policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate started grafana | logger=migrator t=2024-01-14T18:49:53.75649344Z level=info msg="Executing migration" id="add index role.org_id" policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-pap | [2024-01-14T18:50:48.712+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-01-14T18:49:53.757552876Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.059916ms policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-pap | 
{"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"020754a9-8ac4-4079-a8c5-daad36a9cd64","timestampMs":1705258248699,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-01-14T18:49:53.76575752Z level=info msg="Executing migration" id="add unique index role_org_id_name" policy-db-migrator | kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-pap | [2024-01-14T18:50:48.719+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-01-14T18:49:53.767712587Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.956168ms policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-pap | {"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"020754a9-8ac4-4079-a8c5-daad36a9cd64","timestampMs":1705258248699,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-01-14T18:49:53.771808428Z level=info msg="Executing migration" id="add index role_org_id_uid" policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-pap | [2024-01-14T18:50:48.719+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-01-14T18:49:53.773353182Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.545754ms policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) policy-pap | [2024-01-14T18:50:48.720+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-01-14T18:49:53.778340294Z level=info msg="Executing migration" id="create team role table" policy-db-migrator | kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) policy-pap | {"source":"pap-02b17f46-57cf-4d07-81c5-acaf2d49e437","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"020754a9-8ac4-4079-a8c5-daad36a9cd64","timestampMs":1705258248699,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-01-14T18:49:53.779158062Z level=info msg="Migration successfully executed" id="create team role table" duration=817.779µs policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) policy-pap | 
[2024-01-14T18:50:48.720+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE grafana | logger=migrator t=2024-01-14T18:49:53.785983517Z level=info msg="Executing migration" id="add index team_role.org_id" policy-db-migrator | TRUNCATE TABLE sequence kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) policy-pap | [2024-01-14T18:50:48.728+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-01-14T18:49:53.787543561Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.562074ms policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"020754a9-8ac4-4079-a8c5-daad36a9cd64","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d0d1911d-ca26-4d08-90e7-7571bfeb1817","timestampMs":1705258248723,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-01-14T18:49:53.797128131Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" policy-db-migrator | kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) policy-pap | 
[2024-01-14T18:50:48.729+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 020754a9-8ac4-4079-a8c5-daad36a9cd64 grafana | logger=migrator t=2024-01-14T18:49:53.799462862Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.333961ms policy-db-migrator | kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) policy-pap | [2024-01-14T18:50:48.729+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] grafana | logger=migrator t=2024-01-14T18:49:53.80432294Z level=info msg="Executing migration" id="add index team_role.team_id" policy-db-migrator | > upgrade 0100-pdpstatistics.sql kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"020754a9-8ac4-4079-a8c5-daad36a9cd64","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d0d1911d-ca26-4d08-90e7-7571bfeb1817","timestampMs":1705258248723,"name":"apex-cd928c6f-79bf-459b-85d5-8c948d667a25","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-01-14T18:49:53.805559652Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.237283ms policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) policy-pap 
| [2024-01-14T18:50:48.730+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate stopping grafana | logger=migrator t=2024-01-14T18:49:53.811736855Z level=info msg="Executing migration" id="create user role table" policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) policy-pap | [2024-01-14T18:50:48.730+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate stopping enqueue grafana | logger=migrator t=2024-01-14T18:49:53.812653587Z level=info msg="Migration successfully executed" id="create user role table" duration=915.942µs policy-db-migrator | -------------- kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) policy-pap | [2024-01-14T18:50:48.730+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate stopping timer grafana | logger=migrator t=2024-01-14T18:49:53.818141666Z level=info msg="Executing migration" id="add index user_role.org_id" policy-db-migrator | kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) policy-pap | [2024-01-14T18:50:48.730+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=020754a9-8ac4-4079-a8c5-daad36a9cd64, expireMs=1705258278712] grafana | logger=migrator t=2024-01-14T18:49:53.81969217Z level=info msg="Migration successfully executed" id="add index user_role.org_id" 
duration=1.548373ms policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:48.730+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate stopping listener kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.829145726Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" policy-db-migrator | DROP TABLE pdpstatistics policy-pap | [2024-01-14T18:50:48.730+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate stopped kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.830837944Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.692678ms policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:48.734+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 PdpUpdate successful kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.836423646Z level=info msg="Executing migration" id="add index user_role.user_id" policy-db-migrator | policy-pap | [2024-01-14T18:50:48.734+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-cd928c6f-79bf-459b-85d5-8c948d667a25 has no more requests kafka | [2024-01-14 18:50:27,441] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.837921438Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.496022ms policy-db-migrator | policy-pap | [2024-01-14T18:50:54.334+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls kafka | [2024-01-14 18:50:27,443] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) grafana | logger=migrator t=2024-01-14T18:49:53.841522383Z level=info msg="Executing migration" id="create builtin role table" policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql policy-pap | 
[2024-01-14T18:50:54.342+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls kafka | [2024-01-14 18:50:27,443] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.842385322Z level=info msg="Migration successfully executed" id="create builtin role table" duration=862.98µs policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:54.725+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup kafka | [2024-01-14 18:50:27,499] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-14T18:49:53.846425021Z level=info msg="Executing migration" id="add index builtin_role.role_id" policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats policy-pap | [2024-01-14T18:50:55.325+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup kafka | [2024-01-14 18:50:27,512] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-14T18:49:53.848218353Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.793172ms policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:55.325+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup kafka | [2024-01-14 18:50:27,514] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:53.855240196Z level=info msg="Executing migration" id="add index builtin_role.name" 
policy-db-migrator | policy-pap | [2024-01-14T18:50:55.867+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup kafka | [2024-01-14 18:50:27,516] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:53.85769615Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=2.454945ms policy-db-migrator | policy-pap | [2024-01-14T18:50:56.105+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 kafka | [2024-01-14 18:50:27,519] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.861785811Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-pap | [2024-01-14T18:50:56.226+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 kafka | [2024-01-14 18:50:27,542] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-14T18:49:53.870558854Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=8.773103ms policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:56.226+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup kafka | [2024-01-14 18:50:27,543] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties 
{cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-14T18:49:53.879555324Z level=info msg="Executing migration" id="add index builtin_role.org_id" policy-db-migrator | DROP TABLE statistics_sequence policy-pap | [2024-01-14T18:50:56.226+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup kafka | [2024-01-14 18:50:27,543] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:53.881684228Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=2.125483ms policy-db-migrator | -------------- policy-pap | [2024-01-14T18:50:56.238+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-01-14T18:50:56Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-01-14T18:50:56Z, user=policyadmin)] kafka | [2024-01-14 18:50:27,543] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:53.886193373Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" policy-db-migrator | policy-pap | [2024-01-14T18:50:56.935+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup kafka | [2024-01-14 18:50:27,543] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.887309301Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.118128ms policy-db-migrator | policyadmin: OK: upgrade (1300) policy-pap | [2024-01-14T18:50:56.936+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 kafka | [2024-01-14 18:50:27,551] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-14T18:49:53.891699813Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" policy-db-migrator | name version policy-pap | [2024-01-14T18:50:56.936+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 kafka | [2024-01-14 18:50:27,552] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-14T18:49:53.892796351Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.096898ms policy-db-migrator | policyadmin 1300 policy-pap | [2024-01-14T18:50:56.936+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup kafka | [2024-01-14 18:50:27,552] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:53.899366887Z level=info msg="Executing migration" id="add unique index role.uid" policy-db-migrator | ID script operation from_version to_version tag success atTime policy-pap | 
[2024-01-14T18:50:56.937+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup kafka | [2024-01-14 18:50:27,552] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:53.900488156Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.121709ms policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51 policy-pap | [2024-01-14T18:50:56.948+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-14T18:50:56Z, user=policyadmin)] kafka | [2024-01-14 18:50:27,552] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:53.903878623Z level=info msg="Executing migration" id="create seed assignment table" policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51 policy-pap | [2024-01-14T18:50:57.348+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup kafka | [2024-01-14 18:50:27,578] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-14T18:49:53.904657659Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=780.296µs policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51 policy-pap | [2024-01-14T18:50:57.348+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup kafka | [2024-01-14 18:50:27,579] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-14T18:49:53.909159055Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51 policy-pap | [2024-01-14T18:50:57.348+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 kafka | [2024-01-14 18:50:27,579] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:53.910273663Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.117488ms 
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
policy-pap | [2024-01-14T18:50:57.348+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
kafka | [2024-01-14 18:50:27,579] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:53.918721115Z level=info msg="Executing migration" id="add column hidden to role table"
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
policy-pap | [2024-01-14T18:50:57.349+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup
kafka | [2024-01-14 18:50:27,579] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:53.931596219Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=12.876315ms
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
policy-pap | [2024-01-14T18:50:57.349+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup
kafka | [2024-01-14 18:50:27,590] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:53.935910708Z level=info msg="Executing migration" id="permission kind migration"
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
policy-pap | [2024-01-14T18:50:57.360+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-14T18:50:57Z, user=policyadmin)]
kafka | [2024-01-14 18:50:27,591] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:53.942313998Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.40246ms
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
policy-pap | [2024-01-14T18:51:17.928+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
kafka | [2024-01-14 18:50:27,591] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:53.945739116Z level=info msg="Executing migration" id="permission attribute migration"
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
policy-pap | [2024-01-14T18:51:17.931+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
kafka | [2024-01-14 18:50:27,591] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
grafana | logger=migrator t=2024-01-14T18:49:53.954066694Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.327647ms
policy-pap | [2024-01-14T18:51:18.567+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=757313d4-7af4-4931-9ac2-fd2d2926c923, expireMs=1705258278567]
kafka | [2024-01-14 18:50:27,591] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
grafana | logger=migrator t=2024-01-14T18:49:53.963146267Z level=info msg="Executing migration" id="permission identifier migration"
policy-pap | [2024-01-14T18:51:18.670+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=9e5e6847-a4b6-4311-8c93-d24747cde7bf, expireMs=1705258278670]
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
kafka | [2024-01-14 18:50:27,602] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:53.971253236Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.105729ms
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
kafka | [2024-01-14 18:50:27,602] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:53.975224583Z level=info msg="Executing migration" id="add permission identifier index"
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
kafka | [2024-01-14 18:50:27,602] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:53.9762732Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.047956ms
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
kafka | [2024-01-14 18:50:27,602] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:53.983271201Z level=info msg="Executing migration" id="create query_history table v1"
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
kafka | [2024-01-14 18:50:27,603] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:53.984227784Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=956.812µs
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
kafka | [2024-01-14 18:50:27,611] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:53.991389621Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
kafka | [2024-01-14 18:50:27,612] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:53.993186323Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.795601ms
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:51
kafka | [2024-01-14 18:50:27,612] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:53.997234222Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,612] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:53.997345396Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=112.414µs
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,612] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.00591884Z level=info msg="Executing migration" id="rbac disabled migrator"
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,625] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.006035124Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=141.545µs
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,626] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.011322618Z level=info msg="Executing migration" id="teams permissions migration"
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,627] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.012136305Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=814.017µs
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,627] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.016696505Z level=info msg="Executing migration" id="dashboard permissions"
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,627] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.017340307Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=644.852µs
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,635] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.02136939Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,636] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.022034982Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=660.392µs
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,636] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.027513923Z level=info msg="Executing migration" id="drop managed folder create actions"
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,636] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.027868694Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=354.791µs
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,636] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.034838005Z level=info msg="Executing migration" id="alerting notification permissions"
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,645] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.035354602Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=517.767µs
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,646] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.0410471Z level=info msg="Executing migration" id="create query_history_star table v1"
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,646] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.042302361Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.252602ms
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,646] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.046553121Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,646] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.047624477Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.070976ms
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,654] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.054711241Z level=info msg="Executing migration" id="add column org_id in query_history_star"
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,655] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.065526218Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=10.814807ms
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,655] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.070693218Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,655] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.070759511Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=66.863µs
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:52
kafka | [2024-01-14 18:50:27,655] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.075418114Z level=info msg="Executing migration" id="create correlation table v1"
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
kafka | [2024-01-14 18:50:27,665] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.076334435Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=916.201µs
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
kafka | [2024-01-14 18:50:27,666] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.080628867Z level=info msg="Executing migration" id="add index correlations.uid"
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
kafka | [2024-01-14 18:50:27,666] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.082591991Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.962435ms
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
kafka | [2024-01-14 18:50:27,666] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.091984341Z level=info msg="Executing migration" id="add index correlations.source_uid"
kafka | [2024-01-14 18:50:27,667] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.09315124Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.166779ms
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
kafka | [2024-01-14 18:50:27,675] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.102427456Z level=info msg="Executing migration" id="add correlation config column"
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
kafka | [2024-01-14 18:50:27,676] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.115763197Z level=info msg="Migration successfully executed" id="add correlation config column" duration=13.335071ms
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.119907373Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
kafka | [2024-01-14 18:50:27,676] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.12070057Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=792.797µs
kafka | [2024-01-14 18:50:27,676] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.126254353Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
kafka | [2024-01-14 18:50:27,676] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.127875157Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.620573ms
kafka | [2024-01-14 18:50:27,690] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.132026323Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
kafka | [2024-01-14 18:50:27,692] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.168363502Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=36.333588ms
kafka | [2024-01-14 18:50:27,692] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.174876787Z level=info msg="Executing migration" id="create correlation v2"
kafka | [2024-01-14 18:50:27,692] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.175829748Z level=info msg="Migration successfully executed" id="create correlation v2" duration=949.791µs
kafka | [2024-01-14 18:50:27,692] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.189138317Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
kafka | [2024-01-14 18:50:27,704] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.190360827Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.22415ms
kafka | [2024-01-14 18:50:27,705] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.201080471Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
kafka | [2024-01-14 18:50:27,705] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.203306194Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.225053ms
kafka | [2024-01-14 18:50:27,705] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
grafana | logger=migrator t=2024-01-14T18:49:54.209130716Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
kafka | [2024-01-14 18:50:27,705] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
kafka | [2024-01-14 18:50:27,715] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.210264764Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.134008ms
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
kafka | [2024-01-14 18:50:27,715] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.216371535Z level=info msg="Executing migration" id="copy correlation v1 to v2"
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:53
kafka | [2024-01-14 18:50:27,715] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.216781288Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=406.403µs
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,715] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.221259166Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,716] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.222549209Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.289893ms
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,725] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.228212066Z level=info msg="Executing migration" id="add provisioning column"
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,725] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.238591928Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.381543ms
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,725] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.243948785Z level=info msg="Executing migration" id="create entity_events table"
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,725] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.244501903Z level=info msg="Migration successfully executed" id="create entity_events table" duration=552.649µs
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,726] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.248438263Z level=info msg="Executing migration" id="create dashboard public config v1"
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,734] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.249308962Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=870.698µs
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,735] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.25532249Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,735] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.256017143Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,735] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.260867663Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,736] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.261579796Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,743] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.266386334Z level=info msg="Executing migration" id="Drop old dashboard public config table"
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,744] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.268029729Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.643705ms
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,745] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.275104522Z level=info msg="Executing migration" id="recreate dashboard public config v1"
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,745] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.276820339Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.714736ms
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54
kafka | [2024-01-14 18:50:27,745] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1.
(state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:54.281601946Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54 kafka | [2024-01-14 18:50:27,754] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-14T18:49:54.283401846Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.79964ms policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54 grafana | logger=migrator t=2024-01-14T18:49:54.287479701Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" kafka | [2024-01-14 18:50:27,756] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54 grafana | logger=migrator t=2024-01-14T18:49:54.288829505Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.349175ms kafka | [2024-01-14 18:50:27,756] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:54 grafana | logger=migrator t=2024-01-14T18:49:54.293819559Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - 
v2" kafka | [2024-01-14 18:50:27,757] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:55 grafana | logger=migrator t=2024-01-14T18:49:54.294831593Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.011754ms kafka | [2024-01-14 18:50:27,757] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:55 grafana | logger=migrator t=2024-01-14T18:49:54.298291777Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-01-14 18:50:27,766] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:55 grafana | logger=migrator t=2024-01-14T18:49:54.299311491Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.019774ms kafka | [2024-01-14 18:50:27,767] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:55 
grafana | logger=migrator t=2024-01-14T18:49:54.303265361Z level=info msg="Executing migration" id="Drop public config table" kafka | [2024-01-14 18:50:27,767] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:54.304010506Z level=info msg="Migration successfully executed" id="Drop public config table" duration=744.994µs policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:55 kafka | [2024-01-14 18:50:27,767] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:54.309027221Z level=info msg="Executing migration" id="Recreate dashboard public config v2" policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:55 kafka | [2024-01-14 18:50:27,767] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:54.311016837Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.988306ms policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:55 kafka | [2024-01-14 18:50:27,777] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-14T18:49:54.316098714Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:55 kafka | [2024-01-14 18:50:27,777] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-14T18:49:54.317170509Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.071935ms policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:55 kafka | [2024-01-14 18:50:27,778] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:54.324330706Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1401241849510800u 1 2024-01-14 18:49:55 kafka | [2024-01-14 18:50:27,778] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high 
watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:54.326196807Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.878492ms policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:55 kafka | [2024-01-14 18:50:27,778] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:55 grafana | logger=migrator t=2024-01-14T18:49:54.330159728Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" kafka | [2024-01-14 18:50:27,788] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:55 grafana | logger=migrator t=2024-01-14T18:49:54.331252464Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.087966ms kafka | [2024-01-14 18:50:27,789] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:56 grafana | logger=migrator t=2024-01-14T18:49:54.334887484Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" kafka | [2024-01-14 18:50:27,790] INFO [Partition __consumer_offsets-47 
broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:58 grafana | logger=migrator t=2024-01-14T18:49:54.368852104Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=33.95966ms kafka | [2024-01-14 18:50:27,790] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:58 grafana | logger=migrator t=2024-01-14T18:49:54.373421805Z level=info msg="Executing migration" id="add annotations_enabled column" kafka | [2024-01-14 18:50:27,791] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:58 grafana | logger=migrator t=2024-01-14T18:49:54.380640052Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.215977ms kafka | [2024-01-14 18:50:27,797] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:58 grafana | logger=migrator t=2024-01-14T18:49:54.385007036Z level=info msg="Executing migration" id="add time_selection_enabled column" kafka | [2024-01-14 18:50:27,798] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:58 grafana | logger=migrator t=2024-01-14T18:49:54.393839957Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.833011ms kafka | [2024-01-14 18:50:27,798] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.39939359Z level=info msg="Executing migration" id="delete orphaned public dashboards" kafka | [2024-01-14 18:50:27,799] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 
1401241849510900u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.399610817Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=217.387µs kafka | [2024-01-14 18:50:27,799] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.406799605Z level=info msg="Executing migration" id="add share column" kafka | [2024-01-14 18:50:27,807] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1401241849510900u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.418386277Z level=info msg="Migration successfully executed" id="add share column" duration=11.587912ms kafka | [2024-01-14 18:50:27,807] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1401241849511000u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.422009816Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" kafka | [2024-01-14 18:50:27,808] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1401241849511000u 1 
2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.422179542Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=170.496µs kafka | [2024-01-14 18:50:27,808] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1401241849511000u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.425930725Z level=info msg="Executing migration" id="create file table" kafka | [2024-01-14 18:50:27,808] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1401241849511000u 1 2024-01-14 18:49:59 kafka | [2024-01-14 18:50:27,816] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-14T18:49:54.42666052Z level=info msg="Migration successfully executed" id="create file table" duration=729.354µs policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1401241849511000u 1 2024-01-14 18:49:59 kafka | [2024-01-14 18:50:27,817] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-14T18:49:54.430371972Z level=info msg="Executing migration" id="file table idx: path natural pk" policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1401241849511000u 1 2024-01-14 18:49:59 
kafka | [2024-01-14 18:50:27,817] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:54.431443097Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.070945ms policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1401241849511000u 1 2024-01-14 18:49:59 kafka | [2024-01-14 18:50:27,818] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:54.437112535Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1401241849511000u 1 2024-01-14 18:49:59 kafka | [2024-01-14 18:50:27,818] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:54.438875913Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.763458ms policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1401241849511000u 1 2024-01-14 18:49:59 kafka | [2024-01-14 18:50:27,828] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-14T18:49:54.442388148Z level=info msg="Executing migration" id="create file_meta table" policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1401241849511100u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.443536806Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.148528ms kafka | [2024-01-14 18:50:27,829] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1401241849511200u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.450755344Z level=info msg="Executing migration" id="file table idx: path key" kafka | [2024-01-14 18:50:27,829] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1401241849511200u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.4527576Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=2.001426ms kafka | [2024-01-14 18:50:27,830] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high 
watermark 0 (kafka.cluster.Partition) policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1401241849511200u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.462967577Z level=info msg="Executing migration" id="set path collation in file table" kafka | [2024-01-14 18:50:27,830] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1401241849511200u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.463031149Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=64.392µs kafka | [2024-01-14 18:50:27,844] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1401241849511300u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.468898253Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" kafka | [2024-01-14 18:50:27,845] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1401241849511300u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.468996696Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=99.063µs kafka | [2024-01-14 18:50:27,845] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for 
partition __consumer_offsets-22 (kafka.cluster.Partition) policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1401241849511300u 1 2024-01-14 18:49:59 grafana | logger=migrator t=2024-01-14T18:49:54.472894295Z level=info msg="Executing migration" id="managed permissions migration" kafka | [2024-01-14 18:50:27,846] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policyadmin: OK @ 1300 grafana | logger=migrator t=2024-01-14T18:49:54.473778994Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=884.189µs kafka | [2024-01-14 18:50:27,846] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:54.478336834Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" kafka | [2024-01-14 18:50:27,854] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-14T18:49:54.478682565Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=346.101µs kafka | [2024-01-14 18:50:27,855] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-14T18:49:54.484089754Z level=info msg="Executing migration" id="RBAC action name migrator" kafka | [2024-01-14 18:50:27,855] INFO [Partition __consumer_offsets-29 broker=1] 
No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:54.484817198Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=727.294µs kafka | [2024-01-14 18:50:27,855] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-14T18:49:54.490058471Z level=info msg="Executing migration" id="Add UID column to playlist" kafka | [2024-01-14 18:50:27,855] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-14T18:49:54.502313805Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=12.255205ms kafka | [2024-01-14 18:50:27,868] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-14T18:49:54.508414126Z level=info msg="Executing migration" id="Update uid column values in playlist" kafka | [2024-01-14 18:50:27,869] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-14T18:49:54.50853089Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=117.564µs kafka | [2024-01-14 18:50:27,869] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) grafana | 
logger=migrator t=2024-01-14T18:49:54.514130524Z level=info msg="Executing migration" id="Add index for uid in playlist"
kafka | [2024-01-14 18:50:27,869] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.515904073Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.769909ms
kafka | [2024-01-14 18:50:27,869] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.519668027Z level=info msg="Executing migration" id="update group index for alert rules"
kafka | [2024-01-14 18:50:27,878] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.520077141Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=411.034µs
kafka | [2024-01-14 18:50:27,879] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.523414201Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
kafka | [2024-01-14 18:50:27,879] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.523618697Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=204.376µs
kafka | [2024-01-14 18:50:27,879] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.52734356Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
kafka | [2024-01-14 18:50:27,879] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.528091235Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=744.605µs
kafka | [2024-01-14 18:50:27,891] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.532903834Z level=info msg="Executing migration" id="add action column to seed_assignment"
kafka | [2024-01-14 18:50:27,892] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.542843552Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.940637ms
kafka | [2024-01-14 18:50:27,892] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.553447271Z level=info msg="Executing migration" id="add scope column to seed_assignment"
kafka | [2024-01-14 18:50:27,892] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.565677835Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=12.232134ms
kafka | [2024-01-14 18:50:27,892] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.569546072Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
kafka | [2024-01-14 18:50:27,904] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.571175856Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.630204ms
kafka | [2024-01-14 18:50:27,907] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.576806362Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
kafka | [2024-01-14 18:50:27,907] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.685667432Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=108.861661ms
kafka | [2024-01-14 18:50:27,907] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.689832629Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
kafka | [2024-01-14 18:50:27,907] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.69105614Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.223121ms
kafka | [2024-01-14 18:50:27,916] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.697475641Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
grafana | logger=migrator t=2024-01-14T18:49:54.699484298Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.007886ms
kafka | [2024-01-14 18:50:27,917] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.705815867Z level=info msg="Executing migration" id="add primary key to seed_assigment"
kafka | [2024-01-14 18:50:27,917] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.739797507Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=33.980371ms
kafka | [2024-01-14 18:50:27,917] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.74291403Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
kafka | [2024-01-14 18:50:27,917] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.743070345Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=156.515µs
kafka | [2024-01-14 18:50:27,925] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.749425185Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
kafka | [2024-01-14 18:50:27,926] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.749564959Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=139.924µs
kafka | [2024-01-14 18:50:27,926] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.754297436Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
kafka | [2024-01-14 18:50:27,926] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.754627397Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=330.651µs
kafka | [2024-01-14 18:50:27,926] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(997CZFJsQRqK8n_DtEBEGQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.758617168Z level=info msg="Executing migration" id="create folder table"
kafka | [2024-01-14 18:50:27,937] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.760129368Z level=info msg="Migration successfully executed" id="create folder table" duration=1.51493ms
kafka | [2024-01-14 18:50:27,937] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.765160314Z level=info msg="Executing migration" id="Add index for parent_uid"
kafka | [2024-01-14 18:50:27,937] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.76716713Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.006176ms
kafka | [2024-01-14 18:50:27,937] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.771591866Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
kafka | [2024-01-14 18:50:27,938] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.772809336Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.21694ms
kafka | [2024-01-14 18:50:27,944] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.778057109Z level=info msg="Executing migration" id="Update folder title length"
kafka | [2024-01-14 18:50:27,945] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.778096761Z level=info msg="Migration successfully executed" id="Update folder title length" duration=40.991µs
kafka | [2024-01-14 18:50:27,945] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.786986644Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
kafka | [2024-01-14 18:50:27,945] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.78928767Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.299826ms
kafka | [2024-01-14 18:50:27,945] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.794124779Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
kafka | [2024-01-14 18:50:27,951] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.795227045Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.105916ms
kafka | [2024-01-14 18:50:27,952] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.802051611Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
kafka | [2024-01-14 18:50:27,952] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.804079838Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.027967ms
kafka | [2024-01-14 18:50:27,952] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.808450161Z level=info msg="Executing migration" id="create anon_device table"
kafka | [2024-01-14 18:50:27,952] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.809895579Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.444238ms
kafka | [2024-01-14 18:50:27,958] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.8138451Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
kafka | [2024-01-14 18:50:27,958] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.81506762Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.222131ms
kafka | [2024-01-14 18:50:27,958] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.819499606Z level=info msg="Executing migration" id="add index anon_device.updated_at"
kafka | [2024-01-14 18:50:27,958] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.821604825Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.103959ms
kafka | [2024-01-14 18:50:27,958] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.826422544Z level=info msg="Executing migration" id="create signing_key table"
kafka | [2024-01-14 18:50:27,965] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.827933944Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.51033ms
kafka | [2024-01-14 18:50:27,965] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.833025352Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
kafka | [2024-01-14 18:50:27,965] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.834221501Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.194339ms
kafka | [2024-01-14 18:50:27,965] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.842731532Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
kafka | [2024-01-14 18:50:27,965] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.844538612Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.80753ms
kafka | [2024-01-14 18:50:27,973] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.856503956Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
kafka | [2024-01-14 18:50:27,974] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.857085145Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=582.909µs
kafka | [2024-01-14 18:50:27,974] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.863405854Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
kafka | [2024-01-14 18:50:27,974] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.873111344Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.70538ms
kafka | [2024-01-14 18:50:27,974] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.881253452Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
kafka | [2024-01-14 18:50:27,983] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.882731701Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.480089ms
kafka | [2024-01-14 18:50:27,983] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.89117848Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
kafka | [2024-01-14 18:50:27,983] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.892715781Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.53979ms
kafka | [2024-01-14 18:50:27,983] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.899511475Z level=info msg="Executing migration" id="create sso_setting table"
kafka | [2024-01-14 18:50:27,983] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.901399687Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.887762ms
kafka | [2024-01-14 18:50:27,991] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-14T18:49:54.90694484Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
kafka | [2024-01-14 18:50:27,992] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-14T18:49:54.907760207Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=816.467µs
kafka | [2024-01-14 18:50:27,992] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.912461312Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
kafka | [2024-01-14 18:50:27,992] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-14T18:49:54.912765442Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=305.13µs
kafka | [2024-01-14 18:50:27,992] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-14T18:49:54.916372891Z level=info msg="migrations completed" performed=523 skipped=0 duration=4.593128345s
kafka | [2024-01-14 18:50:28,003] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=sqlstore t=2024-01-14T18:49:54.927612382Z level=info msg="Created default admin" user=admin
kafka | [2024-01-14 18:50:28,004] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=sqlstore t=2024-01-14T18:49:54.927829099Z level=info msg="Created default organization"
kafka | [2024-01-14 18:50:28,004] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
grafana | logger=secrets t=2024-01-14T18:49:54.932889406Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
kafka | [2024-01-14 18:50:28,004] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=plugin.store t=2024-01-14T18:49:54.958110367Z level=info msg="Loading plugins..."
kafka | [2024-01-14 18:50:28,004] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=local.finder t=2024-01-14T18:49:54.998952195Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
kafka | [2024-01-14 18:50:28,016] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=plugin.store t=2024-01-14T18:49:54.999009236Z level=info msg="Plugins loaded" count=55 duration=40.899949ms
kafka | [2024-01-14 18:50:28,017] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=query_data t=2024-01-14T18:49:55.010014256Z level=info msg="Query Service initialization"
kafka | [2024-01-14 18:50:28,017] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
grafana | logger=live.push_http t=2024-01-14T18:49:55.017443555Z level=info msg="Live Push Gateway initialization"
kafka | [2024-01-14 18:50:28,017] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=ngalert.migration t=2024-01-14T18:49:55.022859513Z level=info msg=Starting
kafka | [2024-01-14 18:50:28,018] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=ngalert.migration orgID=1 t=2024-01-14T18:49:55.02420908Z level=info msg="Migrating alerts for organisation"
kafka | [2024-01-14 18:50:28,027] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=ngalert.migration orgID=1 t=2024-01-14T18:49:55.024746019Z level=info msg="Alerts found to migrate" alerts=0
kafka | [2024-01-14 18:50:28,028] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=ngalert.migration orgID=1 t=2024-01-14T18:49:55.025288058Z level=warn msg="No available receivers"
kafka | [2024-01-14 18:50:28,028] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2024-01-14 18:50:28,028] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-01-14T18:49:55.029163053Z level=info msg="Completed legacy migration"
kafka | [2024-01-14 18:50:28,028] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=infra.usagestats.collector t=2024-01-14T18:49:55.065367573Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
kafka | [2024-01-14 18:50:28,035] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=provisioning.datasources t=2024-01-14T18:49:55.06843030Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
kafka | [2024-01-14 18:50:28,035] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=provisioning.alerting t=2024-01-14T18:49:55.08452494Z level=info msg="starting to provision alerting"
kafka | [2024-01-14 18:50:28,035] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
grafana | logger=provisioning.alerting t=2024-01-14T18:49:55.084545251Z level=info msg="finished to provision alerting"
kafka | [2024-01-14 18:50:28,035] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=ngalert.state.manager t=2024-01-14T18:49:55.084742237Z level=info msg="Warming state cache for startup"
kafka | [2024-01-14 18:50:28,036] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=ngalert.state.manager t=2024-01-14T18:49:55.0850902Z level=info msg="State cache has been initialized" states=0 duration=343.622µs
kafka | [2024-01-14 18:50:28,043] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=ngalert.scheduler t=2024-01-14T18:49:55.085942349Z level=info msg="Starting scheduler" tickInterval=10s
kafka | [2024-01-14 18:50:28,043] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=grafanaStorageLogger t=2024-01-14T18:49:55.085751933Z level=info msg="Storage starting"
kafka | [2024-01-14 18:50:28,043] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
grafana | logger=ngalert.multiorg.alertmanager t=2024-01-14T18:49:55.086333503Z level=info msg="Starting MultiOrg Alertmanager"
grafana | logger=ticker t=2024-01-14T18:49:55.086380294Z level=info msg=starting first_tick=2024-01-14T18:50:00Z
grafana | logger=http.server t=2024-01-14T18:49:55.090978164Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
kafka | [2024-01-14 18:50:28,043] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=plugins.update.checker t=2024-01-14T18:49:55.170403759Z level=info msg="Update check succeeded" duration=85.076311ms
kafka | [2024-01-14 18:50:28,044] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=sqlstore.transactions t=2024-01-14T18:49:55.172218473Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
kafka | [2024-01-14 18:50:28,051] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=sqlstore.transactions t=2024-01-14T18:49:55.183812737Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
kafka | [2024-01-14 18:50:28,051] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=sqlstore.transactions t=2024-01-14T18:49:55.230471044Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked"
kafka | [2024-01-14 18:50:28,052] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
grafana | logger=sqlstore.transactions t=2024-01-14T18:49:55.241733417Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked"
kafka | [2024-01-14 18:50:28,052] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=grafana.update.checker t=2024-01-14T18:50:01.592174996Z level=info msg="Update check succeeded" duration=6.507013054s
kafka | [2024-01-14 18:50:28,052] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=infra.usagestats t=2024-01-14T18:50:57.097643834Z level=info msg="Usage stats are ready to report"
kafka | [2024-01-14 18:50:28,059] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-14 18:50:28,059] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-14 18:50:28,059] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
kafka | [2024-01-14 18:50:28,059] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-14 18:50:28,059] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-14 18:50:28,066] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-14 18:50:28,067] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-14 18:50:28,067] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
kafka | [2024-01-14 18:50:28,067] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-14 18:50:28,067] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(kjWokfaySVa_GmTM7B9tfA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1.
(state.change.logger) kafka | [2024-01-14 18:50:28,072] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | 
[2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 
for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for 
partition __consumer_offsets-21 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-01-14 18:50:28,073] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-01-14 18:50:28,079] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,080] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 
in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] 
Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,081] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,081] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading 
of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 
18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 
18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,082] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-14 18:50:28,082] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,085] INFO [Broker id=1] Finished LeaderAndIsr request in 689ms correlationId 1 from controller 1 for 51 partitions 
(state.change.logger) kafka | [2024-01-14 18:50:28,086] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,087] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,087] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,088] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,088] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,088] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,088] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,088] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,088] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,088] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,088] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=kjWokfaySVa_GmTM7B9tfA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=997CZFJsQRqK8n_DtEBEGQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-01-14 18:50:28,088] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,088] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,088] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,089] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,090] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,091] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,092] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,092] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,092] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,092] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,092] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-14 18:50:28,097] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,102] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,103] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,103] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,103] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,103] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,103] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,103] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,103] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,103] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,103] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,103] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-14 18:50:28,104] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-14 18:50:28,147] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 9f04366a-9b2f-4312-96e1-33019febbf8b in Empty state. Created a new member id consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3-7bf3d7b2-ab7f-4b21-a745-eed28010975b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-14 18:50:28,159] INFO [GroupCoordinator 1]: Preparing to rebalance group 9f04366a-9b2f-4312-96e1-33019febbf8b in state PreparingRebalance with old generation 0 (__consumer_offsets-32) (reason: Adding new member consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3-7bf3d7b2-ab7f-4b21-a745-eed28010975b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-14 18:50:28,195] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-83ddcd15-11f5-4fb2-8b47-2df21c19e82b and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-14 18:50:28,200] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-83ddcd15-11f5-4fb2-8b47-2df21c19e82b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-14 18:50:28,739] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 4f5099d3-3717-42bb-ba40-fb39c13c7c61 in Empty state. Created a new member id consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2-99824490-0a38-49b6-87ba-48ad4efe4059 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-14 18:50:28,744] INFO [GroupCoordinator 1]: Preparing to rebalance group 4f5099d3-3717-42bb-ba40-fb39c13c7c61 in state PreparingRebalance with old generation 0 (__consumer_offsets-2) (reason: Adding new member consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2-99824490-0a38-49b6-87ba-48ad4efe4059 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-14 18:50:31,181] INFO [GroupCoordinator 1]: Stabilized group 9f04366a-9b2f-4312-96e1-33019febbf8b generation 1 (__consumer_offsets-32) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-14 18:50:31,201] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-14 18:50:31,205] INFO [GroupCoordinator 1]: Assignment received from leader consumer-9f04366a-9b2f-4312-96e1-33019febbf8b-3-7bf3d7b2-ab7f-4b21-a745-eed28010975b for group 9f04366a-9b2f-4312-96e1-33019febbf8b for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-14 18:50:31,206] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-83ddcd15-11f5-4fb2-8b47-2df21c19e82b for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-14 18:50:31,745] INFO [GroupCoordinator 1]: Stabilized group 4f5099d3-3717-42bb-ba40-fb39c13c7c61 generation 1 (__consumer_offsets-2) with 1 members (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-14 18:50:31,764] INFO [GroupCoordinator 1]: Assignment received from leader consumer-4f5099d3-3717-42bb-ba40-fb39c13c7c61-2-99824490-0a38-49b6-87ba-48ad4efe4059 for group 4f5099d3-3717-42bb-ba40-fb39c13c7c61 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping grafana ...
Stopping kafka ...
Stopping policy-api ...
Stopping prometheus ...
Stopping compose_zookeeper_1 ...
Stopping mariadb ...
Stopping simulator ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing grafana ...
Removing kafka ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing prometheus ...
Removing compose_zookeeper_1 ...
Removing mariadb ...
Removing simulator ...
Removing policy-apex-pdp ... done
Removing kafka ... done
Removing compose_zookeeper_1 ... done
Removing policy-pap ... done
Removing grafana ... done
Removing simulator ... done
Removing mariadb ... done
Removing prometheus ... done
Removing policy-api ... done
Removing policy-db-migrator ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-verify-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.qeeZixE4CM ]]
+ rsync -av /tmp/tmp.qeeZixE4CM/ /w/workspace/policy-pap-master-project-csit-verify-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 911,172 bytes  received 95 bytes  1,822,534.00 bytes/sec
total size is 910,626  speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-verify-pap/models
+ exit 1
Build step 'Execute shell' marked build as failure
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2566 killed;
[ssh-agent] Stopped.
Robot results publisher started...
-Parsing output xml:
Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir:
Done!
-Assigning results to build:
Done!
-Checking thresholds:
Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
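The `load_set` trace above captures the active single-letter shell flags (`_setopts=hxB`), disables them while the script body runs, then re-enables them one character at a time with `sed 's/./& /g'`. A minimal sketch of that save/disable/restore pattern, using the POSIX `-f` (noglob) flag for portability instead of the bash-specific `B`:

```shell
# Illustrative only: same pattern as load_set, but with the portable -f flag.
_setopts=f                 # pretend 'f' was among the flags saved from $-
set +f                     # disable it, as load_set does with +h/+x
# ... work that should run without the option ...
for i in $(echo "$_setopts" | sed 's/./& /g'); do
  set "-$i"                # restore each saved single-letter flag
done
case $- in *f*) RESTORED=yes ;; *) RESTORED=no ;; esac
echo "$RESTORED"
```

The `sed 's/./& /g'` trick simply turns the flag string into a space-separated list so the `for` loop can restore one flag per iteration.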
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins11370099289867771707.sh
---> sysstat.sh
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins13610499800600049728.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-verify-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ grep '^ii'
+ dpkg -l
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-verify-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-verify-pap/archives/
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins8763495272052037883.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-uGlI from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-uGlI/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins3851802253672988905.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-verify-pap@tmp/config10822440838525133302tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins11665369395335556394.sh
---> create-netrc.sh
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins1284602914118957787.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-uGlI from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-uGlI/bin to PATH
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins9882058754251193578.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-verify-pap] $ /bin/bash /tmp/jenkins1247116446504378077.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-uGlI from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-uGlI/bin to PATH
INFO: No Stack...
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-verify-pap] $ /bin/bash -l /tmp/jenkins17965899808050067234.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-verify-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-uGlI from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-uGlI/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-verify-pap/499
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
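The package-listing.sh trace above snapshots the installed package list (`dpkg -l | grep '^ii'`) at job start and job end, then diffs the two files to show what the build installed. A self-contained sketch of that diff step; the file contents below are made up for illustration and stand in for /tmp/packages_start.txt and /tmp/packages_end.txt:

```shell
# Hypothetical before/after package lists (contents invented for the demo).
START=$(mktemp); END=$(mktemp); DIFF=$(mktemp)
printf 'ii  bash  4.4-1\nii  curl  7.58.0\n' > "$START"
printf 'ii  bash  4.4-1\nii  curl  7.58.0\nii  jq  1.5\n' > "$END"
diff "$START" "$END" > "$DIFF" || true   # diff exits 1 when the lists differ
NEW_PKGS=$(grep -c '^>' "$DIFF")         # '>' lines = packages added mid-job
echo "$NEW_PKGS"
rm -f "$START" "$END" "$DIFF"
```

The archived packages_diff.txt produced by the real script is exactly this `diff` output, which is why added packages show up prefixed with `>`.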
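Both pip errors above are bound conflicts on `openstacksdk`: one venv install leaves a version above lftools' `<1.5.0` cap, the other leaves one below python-openstackclient's `>=2.0.0` floor. A rough sketch of the check pip is performing, using a naive dotted-integer version parse (real pip applies full PEP 440 semantics, so this is illustrative only):

```python
def parse(v):
    # naive parse; assumes plain dotted integers like "1.4.0" or "2.1.0"
    return tuple(int(p) for p in v.split("."))

def conflicts(installed, op, bound):
    # models messages like "requires openstacksdk<1.5.0, but you have 2.1.0"
    if op == "<":
        return not (parse(installed) < parse(bound))
    if op == ">=":
        return not (parse(installed) >= parse(bound))
    raise ValueError(op)

# the two conflicts reported in the log:
print(conflicts("2.1.0", "<", "1.5.0"))   # prints True (lftools cap violated)
print(conflicts("1.4.0", ">=", "2.0.0"))  # prints True (openstackclient floor violated)
```

Because the same venv (/tmp/venv-uGlI) is reused across post-build steps, each `lf-activate-venv()` install can flip `openstacksdk` across one of these bounds and trip the other tool's constraint.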
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-12237 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2800.000
BogoMIPS:            5600.00
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem   Size  Used Avail Use% Mounted on
udev          16G     0   16G   0% /dev
tmpfs        3.2G  708K  3.2G   1% /run
/dev/vda1    155G   14G  142G   9% /
tmpfs         16G     0   16G   0% /dev/shm
tmpfs        5.0M     0  5.0M   0% /run/lock
tmpfs         16G     0   16G   0% /sys/fs/cgroup
/dev/vda15   105M  4.4M  100M   5% /boot/efi
tmpfs        3.2G     0  3.2G   0% /run/user/1001
---> free -m:
       total   used   free  shared  buff/cache  available
Mem:   32167    860  25043       0        6262      30850
Swap:   1023      0   1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:65:b4:86 brd ff:ff:ff:ff:ff:ff
    inet 10.30.106.241/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 81801sec preferred_lft 81801sec
    inet6 fe80::f816:3eff:fe65:b486/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:7c:bd:e9:70 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-12237)  01/14/24  _x86_64_  (8 CPU)
17:37:07  LINUX RESTART  (8 CPU)
17:38:03      tps     rtps     wtps  bread/s  bwrtn/s
17:39:01    57.18    32.65    24.53  2866.13 24146.46
17:40:01     7.41     0.00     7.41     0.00  9422.62
17:41:01     7.08     0.00     7.08     0.00  9554.01
17:42:01     7.88     0.00     7.88     0.00  9564.27
17:43:01     7.05     0.00     7.05     0.00  9554.41
17:44:01     7.56     0.02     7.55     0.13  9693.30
17:45:01     7.15     0.00     7.15     0.00  9690.67
17:46:01     7.76     0.00     7.76     0.00  9694.50
17:47:01     7.12     0.00     7.12     0.00  9688.79
17:48:01     7.87     0.00     7.87     0.00  9830.36
17:49:01     7.17     0.00     7.17     0.00  9689.05
17:50:01     7.41     0.00     7.41     0.00  9824.59
17:51:01     7.06     0.00     7.06     0.00  9687.57
17:52:01    12.36     4.07     8.30    32.53  9709.18
17:53:01     7.08     0.00     7.08     0.00  9688.92
17:54:01     3.55     0.00     3.55     0.00  3644.59
17:55:01     0.85     0.00     0.85     0.00     9.87
17:56:01     1.17     0.00     1.17     0.00    15.06
17:57:01     1.00     0.00     1.00     0.00    12.00
17:58:01     1.70     0.00     1.70     0.00    19.33
17:59:01     0.88     0.00     0.88     0.00    10.40
18:00:01     1.53     0.00     1.53     0.00    18.26
18:01:01     0.92     0.00     0.92     0.00    11.20
18:02:01     1.62     0.00     1.62     0.00    18.93
18:03:01     0.86     0.00     0.86     0.00    10.38
18:04:01     1.27     0.00     1.27     0.00    15.86
18:05:01     0.88     0.00     0.88     0.00    10.80
18:06:01     1.23     0.00     1.23     0.00    15.85
18:07:01     1.00     0.00     1.00     0.00    12.26
18:08:01     2.18     0.97     1.22    20.66    18.66
18:09:01     1.63     0.00     1.63     0.00    20.40
18:10:01     1.08     0.00     1.08     0.00    14.93
18:11:01     0.93     0.00     0.93     0.00    12.00
18:12:01     1.32     0.00     1.32     0.00    16.66
18:13:01     1.02     0.00     1.02     0.00    11.60
18:14:01     1.22     0.00     1.22     0.00    15.06
18:15:01     0.92     0.00     0.92     0.00    10.40
18:16:01     1.28     0.00     1.28     0.00    15.73
18:17:01     1.00     0.02     0.98     0.13    12.53
18:18:01     1.57     0.00     1.57     0.00    19.20
18:19:01     0.85     0.00     0.85     0.00    10.40
18:20:01     1.33     0.00     1.33     0.00    16.53
18:21:01     0.87     0.00     0.87     0.00    11.86
18:22:01     1.23     0.00     1.23     0.00    16.00
18:23:01     3.23     2.07     1.17    56.26    14.80
18:24:01     1.08     0.00     1.08     0.00    14.40
18:25:02     1.00     0.00     1.00     0.00    11.86
18:26:01     1.17     0.00     1.17     0.00    15.59
18:27:01     1.08     0.00     1.08     0.00    13.20
18:28:01     1.45     0.00     1.45     0.00    18.00
18:29:01     0.97     0.00     0.97     0.00    11.60
18:30:01     1.18     0.00     1.18     0.00    15.33
18:31:01     0.85     0.00     0.85     0.00    10.66
18:32:01     1.13     0.00     1.13     0.00    15.19
18:33:01     1.18     0.00     1.18     0.00    14.26
18:34:01     1.10     0.00     1.10     0.00    14.53
18:35:01     0.85     0.00     0.85     0.00    10.93
18:36:01     1.08     0.00     1.08     0.00    14.66
18:37:01     0.88     0.00     0.88     0.00    10.66
18:38:01     1.28     0.00     1.28     0.00    16.80
18:39:01     0.85     0.00     0.85     0.00    11.46
18:40:01     1.33     0.00     1.33     0.00    16.93
18:41:01     0.90     0.00     0.90     0.00    11.33
18:42:01     1.23     0.00     1.23     0.00    17.19
18:43:01     1.23     0.00     1.23     0.00    15.86
18:44:01     1.22     0.00     1.22     0.00    15.73
18:45:01     0.90     0.00     0.90     0.00    10.93
18:46:01   243.41    24.01   219.40   817.73  2754.21
18:47:01    89.77    13.91    75.85   979.44  9540.54
18:48:01   113.53    22.90    90.63  2745.14 12227.56
18:49:01   129.71     0.03   129.67     0.27 70417.99
18:50:01   363.20    11.60   351.60   761.88 65665.63
18:51:01    19.71     0.37    19.35    29.06  4742.63
18:52:01     5.13     0.00     5.13     0.00   100.30
18:53:01    51.21     0.72    50.49    42.39  2078.24
Average:    16.58     1.50    15.09   110.15  4410.12
17:38:03  kbmemfree  kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
17:39:01  30648916  31892856  2290304  6.95  38628  1534024  1374484  4.04  626200  1416368  32
17:40:01  30648096  31892192  2291124  6.96  38708  1534036  1374484  4.04  625824  1416308  12
17:41:01  30646592  31890824  2292628  6.96  38788  1534036  1374484  4.04  626732  1416316  12
17:42:01  30646356  31890660  2292864  6.96  38892  1534012  1374484  4.04  627460  1416324  12
17:43:01  30644792  31889168  2294428  6.97  38972  1534048  1374484  4.04  629152  1416340  28
17:44:01  30643248  31887860  2295972  6.97  39060  1534188  1374484  4.04  630656  1416444  160
17:45:01  30642248  31886960  2296972  6.97  39140  1534192  1374484  4.04  631532  1416488  160
17:46:01  30636640  31881420  2302580  6.99  39220  1534196  1374484  4.04  636980  1416496  4
17:47:01  30634892  31879800  2304328  7.00  39300  1534200  1374484  4.04  638388  1416508  180
17:48:01  30633760  31878752  2305460  7.00  39388  1534204  1374484  4.04  639540  1416468  164
17:49:01  30632996  31878060  2306224  7.00  39468  1534208  1374484  4.04  640720  1416476  164
17:50:01  30623480  31868668  2315740  7.03  39548  1534212  1374484  4.04  650112  1416488  28
17:51:01  30622208  31867468  2317012  7.03  39628  1534220  1374484  4.04  651368  1416496  192
17:52:01  30619148  31865580  2320072  7.04  40688  1534228  1380068  4.06  653572  1416488  140
17:53:01  30618056  31864556  2321164  7.05  40768  1534232  1380068  4.06  654688  1416500  204
17:54:01  30617336  31863896  2321884  7.05  40824  1534236  1380068  4.06  655760  1416508  164
17:55:01  30616088  31862708  2323132  7.05  40856  1534236  1380068  4.06  656940  1416512  176
17:56:01  30615664  31862300  2323556  7.05  40888  1534240  1380068  4.06  657748  1416516  8
17:57:01  30614136  31860828  2325084  7.06  40936  1534236  1380068  4.06  658944  1416524  24
17:58:01  30613112  31859868  2326108  7.06  40992  1534248  1380068  4.06  660180  1416532  172
17:59:01  30612064  31858876  2327156  7.07  41016  1534252  1380068  4.06  661412  1416536  168
18:00:01  30610308  31857196  2328912  7.07  41056  1534260  1380068  4.06  663496  1416544  172
18:01:01  30609432  31856336  2329788  7.07  41088  1534264  1380068  4.06  664520  1416528  8
18:02:01  30608276  31855248  2330944  7.08  41144  1534268  1380068  4.06  665548  1416536  32
18:03:01  30607152  31854176  2332068  7.08  41168  1534272  1380068  4.06  666768  1416536  188
18:04:01  30604764  31851832  2334456  7.09  41208  1534276  1380068  4.06  668952  1416544  12
18:05:01  30603056  31850128  2336164  7.09  41232  1534280  1380068  4.06  670704  1416548  8
18:06:01  30601276  31848428  2337944  7.10  41280  1534284  1380068  4.06  671908  1416548  172
18:07:01  30599024  31846236  2340196  7.10  41328  1534288  1380068  4.06  674580  1416556  60
18:08:01  30595948  31843988  2343272  7.11  41424  1534904  1398088  4.11  676064  1416956  392
18:09:01  30595188  31843240  2344032  7.12  41464  1534912  1398088  4.11  677040  1416964  168
18:10:01  30593064  31841212  2346156  7.12  41512  1534916  1398088  4.11  679808  1416972  220
18:11:01  30590324  31838480  2348896  7.13  41552  1534920  1398088  4.11  682368  1416976  188
18:12:01  30589464  31837688  2349756  7.13  41600  1534924  1398088  4.11  683272  1416980  40
18:13:01  30584264  31832548  2354956  7.15  41640  1534928  1398088  4.11  689120  1416988  172
18:14:01  30583428  31831784  2355792  7.15  41688  1534932  1398088  4.11  690112  1416992  4
18:15:01  30583384  31831748  2355836  7.15  41728  1534936  1398088  4.11  690000  1416996  184
18:16:01  30582996  31831420  2356224  7.15  41776  1534940  1398088  4.11  690172  1416996  164
18:17:01  30582000  31830468  2357220  7.16  41820  1534924  1398088  4.11  690152  1417000  8
18:18:01  30581720  31830424  2357500  7.16  41860  1535080  1398088  4.11  690240  1417120  24
18:19:01  30581908  31830660  2357312  7.16  41884  1535084  1398088  4.11  690256  1417124  24
18:20:01  30581900  31830728  2357320  7.16  41940  1535080  1398088  4.11  690348  1417128  180
18:21:01  30581808  31830672  2357412  7.16  41980  1535092  1398088  4.11  690368  1417132  28
18:22:01  30581520  31830440  2357700  7.16  42036  1535076  1398088  4.11  690404  1417132  12
18:23:01  30561072  31812308  2378148  7.22  42084  1536780  1415756  4.17  709976  1417792  12
18:24:01  30561164  31812436  2378056  7.22  42124  1536784  1415756  4.17  709700  1417796  28
18:25:02  30560776  31812100  2378444  7.22  42164  1536788  1415756  4.17  709712  1417800  8
18:26:01  30560892  31812268  2378328  7.22  42204  1536796  1415756  4.17  709980  1417808  40
18:27:01  30561048  31812448  2378172  7.22  42244  1536780  1415756  4.17  709776  1417816  208
18:28:01  30560580  31812084  2378640  7.22  42308  1536808  1415756  4.17  709960  1417820  28
18:29:01  30560572  31812124  2378648  7.22  42348  1536812  1415756  4.17  710168  1417824  24
18:30:01  30560384  31811988  2378836  7.22  42388  1536808  1415756  4.17  710076  1417828  4
18:31:01  30560484  31812104  2378736  7.22  42420  1536796  1415756  4.17  710104  1417832  228
18:32:01  30560524  31812204  2378696  7.22  42460  1536816  1415756  4.17  710340  1417840  24
18:33:01  30560704  31812352  2378516  7.22  42504  1536808  1415756  4.17  710160  1417844  36
18:34:01  30560108  31811844  2379112  7.22  42536  1536832  1415756  4.17  710260  1417848  24
18:35:01  30560416  31812180  2378804  7.22  42560  1536820  1415756  4.17  710212  1417848  188
18:36:01  30559724  31811532  2379496  7.22  42592  1536840  1415756  4.17  710480  1417848  200
18:37:01  30559880  31811740  2379340  7.22  42608  1536844  1399172  4.12  710352  1417856  36
18:38:01  30559864  31811780  2379356  7.22  42648  1536848  1399172  4.12  710488  1417864  12
18:39:01  30559840  31811760  2379380  7.22  42680  1536828  1399172  4.12  710508  1417868  220
18:40:01  30559880  31811868  2379340  7.22  42720  1536856  1399172  4.12  710480  1417868  160
18:41:01  30559284  31811312  2379936  7.23  42760  1536844  1399172  4.12  710596  1417876  12
18:42:01  30559100  31811212  2380120  7.23  42808  1536868  1399172  4.12  710700  1417884  208
18:43:01  30558656  31810800  2380564  7.23  42876  1536872  1399172  4.12  711060  1417892  40
18:44:01  30559324  31811652  2379896  7.23  42916  1537004  1420212  4.18  710652  1418028  148
18:45:01  30559192  31811588  2380028  7.23  42956  1536992  1420212  4.18  710740  1418020  124
18:46:01  30377112  31722220  2562108  7.78  51236  1616348  1484144  4.37  822388  1473876  43048
18:47:01  29931356  31639060  3007864  9.13  80456  1929612  1546316  4.55  949628  1750820  154328
18:48:01  27877184  31627176  5062036  15.37  113648  3871728  1650620  4.86  1048636  3594916  1760972
18:49:01  27185932  31646032  5753288  17.47  131776  4514756  1464228  4.31  1048976  4237096  341116
18:50:01  24735508  30715024  8203712  24.91  156880  5932436  7880832  23.19  2094916  5514404  180
18:51:01  23343872  29459320  9595348  29.13  158360  6063644  9002584  26.49  3424640  5561176  536
18:52:01  23327188  29443428  9612032  29.18  158456  6064216  8969528  26.39  3441624  5561132  220
18:53:01  25682660  31612192  7256560  22.03  159452  5892996  1594808  4.69  1314612  5395368  3616
Average:  30162751  31750087  2776469  8.43  50391  1849993  1692920  4.98  795227  1707106  30809
17:38:03  IFACE  rxpck/s  txpck/s  rxkB/s  txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
17:39:01  lo       0.48    0.48    0.05    0.05  0.00  0.00  0.00  0.00
17:39:01  ens3   216.81  156.65  530.80   31.09  0.00  0.00  0.00  0.00
17:39:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:40:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
17:40:01  ens3     0.77    0.50    0.23    0.57  0.00  0.00  0.00  0.00
17:40:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:41:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:41:01  ens3     0.27    0.15    0.06    0.17  0.00  0.00  0.00  0.00
17:41:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:42:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
17:42:01  ens3     0.40    0.23    0.07    0.17  0.00  0.00  0.00  0.00
17:42:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:43:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:43:01  ens3     0.65    0.42    0.31    0.49  0.00  0.00  0.00  0.00
17:43:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:44:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
17:44:01  ens3     0.40    0.20    0.14    0.07  0.00  0.00  0.00  0.00
17:44:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:45:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:45:01  ens3     0.40    0.17    0.07    0.21  0.00  0.00  0.00  0.00
17:45:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:46:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
17:46:01  ens3     4.03    1.83    2.63    0.75  0.00  0.00  0.00  0.00
17:46:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:47:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:47:01  ens3     0.32    0.10    0.06    0.07  0.00  0.00  0.00  0.00
17:47:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:48:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
17:48:01  ens3     0.27    0.17    0.06    0.01  0.00  0.00  0.00  0.00
17:48:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:49:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:49:01  ens3     0.30    0.20    0.13    0.20  0.00  0.00  0.00  0.00
17:49:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:50:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
17:50:01  ens3     0.97    1.02    0.47    1.50  0.00  0.00  0.00  0.00
17:50:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:51:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:51:01  ens3     0.13    0.12    0.05    0.16  0.00  0.00  0.00  0.00
17:51:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:52:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
17:52:01  ens3     0.20    0.18    0.06    0.01  0.00  0.00  0.00  0.00
17:52:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:53:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:53:01  ens3     0.23    0.15    0.06    0.17  0.00  0.00  0.00  0.00
17:53:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:54:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
17:54:01  ens3     0.48    0.55    0.16    0.59  0.00  0.00  0.00  0.00
17:54:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:55:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:55:01  ens3     0.35    0.18    0.07    0.22  0.00  0.00  0.00  0.00
17:55:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:56:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
17:56:01  ens3     0.37    0.18    0.07    0.01  0.00  0.00  0.00  0.00
17:56:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:57:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:57:01  ens3     0.22    0.12    0.06    0.16  0.00  0.00  0.00  0.00
17:57:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:58:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
17:58:01  ens3     0.25    0.23    0.06    0.17  0.00  0.00  0.00  0.00
17:58:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:59:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
17:59:01  ens3     0.28    0.30    0.13    0.39  0.00  0.00  0.00  0.00
17:59:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:00:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
18:00:01  ens3     0.45    0.27    0.12    0.02  0.00  0.00  0.00  0.00
18:00:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:01:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:01:01  ens3     0.15    0.20    0.05    0.17  0.00  0.00  0.00  0.00
18:01:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:02:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
18:02:01  ens3     0.33    0.22    0.07    0.17  0.00  0.00  0.00  0.00
18:02:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:03:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:03:01  ens3     0.17    0.20    0.05    0.31  0.00  0.00  0.00  0.00
18:03:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:04:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
18:04:01  ens3     1.38    0.63    0.58    0.50  0.00  0.00  0.00  0.00
18:04:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:05:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:05:01  ens3     0.50    0.42    0.31    0.38  0.00  0.00  0.00  0.00
18:05:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:06:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
18:06:01  ens3     0.27    0.20    0.06    0.18  0.00  0.00  0.00  0.00
18:06:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:07:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:07:01  ens3     1.03    0.55    0.40    0.63  0.00  0.00  0.00  0.00
18:07:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:08:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
18:08:01  ens3     0.25    0.15    0.06    0.07  0.00  0.00  0.00  0.00
18:08:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:09:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:09:01  ens3     0.30    0.28    0.13    0.40  0.00  0.00  0.00  0.00
18:09:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:10:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
18:10:01  ens3     0.53    0.40    0.17    0.34  0.00  0.00  0.00  0.00
18:10:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:11:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:11:01  ens3     0.95    0.52    0.40    0.66  0.00  0.00  0.00  0.00
18:11:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:12:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
18:12:01  ens3     0.27    0.23    0.06    0.19  0.00  0.00  0.00  0.00
18:12:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:13:01  lo       0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:13:01  ens3     4.92    5.35    0.36    6.77  0.00  0.00  0.00  0.00
18:13:01  docker0  0.00    0.00    0.00    0.00  0.00  0.00  0.00  0.00
18:14:01  lo       0.20    0.20    0.01    0.01  0.00  0.00  0.00  0.00
18:14:01  ens3     3.18    3.27    0.32    4.79
0.00 0.00 0.00 0.00 18:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:15:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:15:01 ens3 0.30 0.20 0.06 0.44 0.00 0.00 0.00 0.00 18:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:16:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:16:01 ens3 0.38 0.28 0.18 0.18 0.00 0.00 0.00 0.00 18:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:17:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:17:01 ens3 0.32 0.20 0.06 0.17 0.00 0.00 0.00 0.00 18:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:18:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:18:01 ens3 0.32 0.27 0.07 0.33 0.00 0.00 0.00 0.00 18:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:19:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:19:01 ens3 0.32 0.22 0.13 0.20 0.00 0.00 0.00 0.00 18:19:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:20:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:20:01 ens3 0.68 0.50 0.27 0.72 0.00 0.00 0.00 0.00 18:20:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:21:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:21:01 ens3 0.15 0.13 0.05 0.17 0.00 0.00 0.00 0.00 18:21:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:22:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:22:01 ens3 0.20 0.22 0.06 0.17 0.00 0.00 0.00 0.00 18:22:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:23:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:23:01 ens3 9.83 7.92 6.44 7.85 0.00 0.00 0.00 0.00 18:23:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:24:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:24:01 ens3 0.33 0.25 0.13 0.26 0.00 0.00 0.00 0.00 18:24:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:25:02 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:25:02 ens3 0.23 0.20 0.06 0.39 0.00 0.00 0.00 0.00 18:25:02 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:26:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:26:01 ens3 0.27 0.24 0.06 0.18 0.00 0.00 0.00 0.00 18:26:01 docker0 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:27:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:27:01 ens3 0.25 0.18 0.06 0.18 0.00 0.00 0.00 0.00 18:27:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:28:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:28:01 ens3 0.25 0.18 0.06 0.01 0.00 0.00 0.00 0.00 18:28:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:29:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:29:01 ens3 0.32 0.27 0.13 0.37 0.00 0.00 0.00 0.00 18:29:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:30:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:30:01 ens3 0.22 0.18 0.06 0.05 0.00 0.00 0.00 0.00 18:30:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:31:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:31:01 ens3 0.17 0.20 0.05 0.34 0.00 0.00 0.00 0.00 18:31:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:32:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:32:01 ens3 0.20 0.18 0.06 0.01 0.00 0.00 0.00 0.00 18:32:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:33:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:33:01 ens3 0.17 0.20 0.05 0.34 0.00 0.00 0.00 0.00 18:33:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:34:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:34:01 ens3 0.32 0.20 0.13 0.04 0.00 0.00 0.00 0.00 18:34:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:35:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:35:01 ens3 0.12 0.12 0.05 0.36 0.00 0.00 0.00 0.00 18:35:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:36:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:36:01 ens3 0.20 0.13 0.06 0.01 0.00 0.00 0.00 0.00 18:36:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:37:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:37:01 ens3 0.13 0.12 0.05 0.33 0.00 0.00 0.00 0.00 18:37:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:38:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:38:01 ens3 1.50 0.65 0.55 0.39 0.00 0.00 0.00 0.00 18:38:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:39:01 
lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:39:01 ens3 0.25 0.17 0.13 0.13 0.00 0.00 0.00 0.00 18:39:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:40:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:40:01 ens3 0.33 0.28 0.11 0.53 0.00 0.00 0.00 0.00 18:40:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:41:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:41:01 ens3 0.92 0.38 0.40 0.66 0.00 0.00 0.00 0.00 18:41:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:42:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:42:01 ens3 0.42 0.17 0.39 0.04 0.00 0.00 0.00 0.00 18:42:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:43:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:43:01 ens3 0.62 0.53 0.31 0.50 0.00 0.00 0.00 0.00 18:43:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:44:01 lo 0.20 0.20 0.01 0.01 0.00 0.00 0.00 0.00 18:44:01 ens3 0.30 0.20 0.13 0.08 0.00 0.00 0.00 0.00 18:44:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:45:01 lo 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:45:01 ens3 0.13 0.10 0.05 0.20 0.00 0.00 0.00 0.00 18:45:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:46:01 lo 0.80 0.80 0.07 0.07 0.00 0.00 0.00 0.00 18:46:01 ens3 94.67 72.02 333.16 41.06 0.00 0.00 0.00 0.00 18:46:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:47:01 lo 1.00 1.00 0.11 0.11 0.00 0.00 0.00 0.00 18:47:01 ens3 65.37 45.78 936.68 7.91 0.00 0.00 0.00 0.00 18:47:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:48:01 lo 8.13 8.13 0.75 0.75 0.00 0.00 0.00 0.00 18:48:01 br-c262f339c28f 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:48:01 ens3 359.56 231.18 10671.99 20.28 0.00 0.00 0.00 0.00 18:48:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:49:01 lo 2.00 2.00 0.24 0.24 0.00 0.00 0.00 0.00 18:49:01 br-c262f339c28f 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:49:01 ens3 427.51 232.17 7330.80 18.78 0.00 0.00 0.00 0.00 18:49:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 18:50:01 lo 4.20 4.20 0.39 0.39 0.00 0.00 0.00 0.00 
18:50:01 vethe7375d0 0.42 0.55 0.03 0.03 0.00 0.00 0.00 0.00
18:50:01 veth2ff6681 1.87 2.30 0.18 0.21 0.00 0.00 0.00 0.00
18:50:01 br-c262f339c28f 0.53 0.45 0.04 0.21 0.00 0.00 0.00 0.00
18:51:01 lo 4.83 4.83 3.50 3.50 0.00 0.00 0.00 0.00
18:51:01 vethe7375d0 3.22 3.97 0.64 0.42 0.00 0.00 0.00 0.00
18:51:01 veth2ff6681 15.00 12.95 1.93 1.95 0.00 0.00 0.00 0.00
18:51:01 br-c262f339c28f 1.90 2.18 1.76 1.78 0.00 0.00 0.00 0.00
18:52:01 lo 4.72 4.72 0.35 0.35 0.00 0.00 0.00 0.00
18:52:01 vethe7375d0 3.20 4.67 0.66 0.36 0.00 0.00 0.00 0.00
18:52:01 veth2ff6681 13.93 9.37 1.06 1.34 0.00 0.00 0.00 0.00
18:52:01 br-c262f339c28f 0.87 0.87 0.11 0.08 0.00 0.00 0.00 0.00
18:53:01 lo 5.95 5.95 0.51 0.51 0.00 0.00 0.00 0.00
18:53:01 ens3 1761.69 1056.02 34837.48 205.91 0.00 0.00 0.00 0.00
18:53:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: lo 0.52 0.52 0.09 0.09 0.00 0.00 0.00 0.00
Average: ens3 23.26 13.93 463.86 2.72 0.00 0.00 0.00 0.00
Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-12237) 01/14/24 _x86_64_ (8 CPU)

17:37:07 LINUX RESTART (8 CPU)

17:38:03 CPU %user %nice %system %iowait %steal %idle
17:39:01 all 2.15 0.00 0.35 12.26 0.02 85.22
17:39:01 0 1.99 0.00 0.40 3.77 0.02 93.83
17:39:01 1 1.07 0.00 0.24 1.05 0.02 97.62
17:39:01 2 2.69 0.00 0.27 15.89 0.02 81.13
17:39:01 3 1.30 0.00 0.69 0.02 0.02 97.98
17:39:01 4 1.88 0.00 0.38 16.76 0.05 80.93
17:39:01 5 1.59 0.00 0.35 30.56 0.02 67.49
17:39:01 6 3.28 0.00 0.31 2.26 0.02 94.13
17:39:01 7 3.41 0.00 0.22 27.76 0.02 68.59
17:40:01 all 0.18 0.00 0.02 0.99 0.01 98.80
17:40:01 0 0.10 0.00 0.03 0.00 0.03 99.83
17:40:01 1 0.08 0.00 0.02 0.00 0.00 99.90
17:40:01 2 1.13 0.00 0.00 7.80 0.00 91.07
17:40:01 3 0.02 0.00 0.02 0.00 0.00 99.97
17:40:01 4 0.00 0.00 0.00 0.00 0.00 100.00
17:40:01 5 0.00 0.00 0.00 0.10 0.00 99.90
17:40:01 6 0.02 0.00 0.00 0.00 0.00 99.98
17:40:01 7 0.08 0.00 0.02 0.00 0.02 99.88
17:41:01 all 0.08 0.00 0.00 0.95
0.01 98.96 17:41:01 0 0.03 0.00 0.02 0.00 0.02 99.93 17:41:01 1 0.02 0.00 0.02 0.00 0.00 99.97 17:41:01 2 0.55 0.00 0.00 7.59 0.02 91.84 17:41:01 3 0.00 0.00 0.00 0.00 0.00 100.00 17:41:01 4 0.02 0.00 0.00 0.00 0.00 99.98 17:41:01 5 0.02 0.00 0.00 0.00 0.00 99.98 17:41:01 6 0.00 0.00 0.00 0.00 0.00 100.00 17:41:01 7 0.00 0.00 0.00 0.00 0.00 100.00 17:42:01 all 0.09 0.00 0.01 0.84 0.00 99.05 17:42:01 0 0.03 0.00 0.03 0.00 0.02 99.92 17:42:01 1 0.03 0.00 0.00 0.00 0.00 99.97 17:42:01 2 0.61 0.00 0.05 6.59 0.02 92.73 17:42:01 3 0.02 0.00 0.00 0.00 0.00 99.98 17:42:01 4 0.02 0.00 0.00 0.00 0.00 99.98 17:42:01 5 0.00 0.00 0.02 0.07 0.00 99.92 17:42:01 6 0.02 0.00 0.02 0.05 0.00 99.92 17:42:01 7 0.03 0.00 0.00 0.00 0.02 99.95 17:43:01 all 0.06 0.00 0.01 0.82 0.01 99.10 17:43:01 0 0.03 0.00 0.03 0.00 0.02 99.92 17:43:01 1 0.07 0.00 0.02 0.00 0.02 99.90 17:43:01 2 0.40 0.00 0.02 6.53 0.00 93.06 17:43:01 3 0.00 0.00 0.02 0.00 0.00 99.98 17:43:01 4 0.00 0.00 0.00 0.00 0.00 100.00 17:43:01 5 0.00 0.00 0.00 0.00 0.02 99.98 17:43:01 6 0.00 0.00 0.00 0.00 0.00 100.00 17:43:01 7 0.02 0.00 0.02 0.00 0.02 99.95 17:44:01 all 0.02 0.00 0.01 0.69 0.01 99.27 17:44:01 0 0.03 0.00 0.02 0.00 0.02 99.93 17:44:01 1 0.03 0.00 0.02 0.00 0.00 99.95 17:44:01 2 0.00 0.00 0.00 5.54 0.00 94.46 17:44:01 3 0.02 0.00 0.00 0.00 0.00 99.98 17:44:01 4 0.00 0.00 0.00 0.00 0.00 100.00 17:44:01 5 0.00 0.00 0.02 0.00 0.00 99.98 17:44:01 6 0.00 0.00 0.03 0.00 0.02 99.95 17:44:01 7 0.03 0.00 0.00 0.00 0.00 99.97 17:45:01 all 0.08 0.00 0.00 0.67 0.01 99.24 17:45:01 0 0.03 0.00 0.02 0.00 0.02 99.93 17:45:01 1 0.00 0.00 0.00 0.00 0.00 100.00 17:45:01 2 0.57 0.00 0.00 5.36 0.02 94.05 17:45:01 3 0.00 0.00 0.00 0.00 0.00 100.00 17:45:01 4 0.00 0.00 0.02 0.00 0.00 99.98 17:45:01 5 0.00 0.00 0.00 0.00 0.00 100.00 17:45:01 6 0.03 0.00 0.02 0.00 0.00 99.95 17:45:01 7 0.00 0.00 0.00 0.00 0.02 99.98 17:46:01 all 0.06 0.00 0.01 0.65 0.01 99.28 17:46:01 0 0.05 0.00 0.03 0.08 0.02 99.82 17:46:01 1 0.03 0.00 0.00 0.00 0.00 
99.97 17:46:01 2 0.22 0.00 0.00 5.10 0.02 94.67 17:46:01 3 0.02 0.00 0.00 0.00 0.00 99.98 17:46:01 4 0.02 0.00 0.00 0.00 0.02 99.97 17:46:01 5 0.02 0.00 0.02 0.00 0.00 99.97 17:46:01 6 0.05 0.00 0.02 0.00 0.00 99.93 17:46:01 7 0.07 0.00 0.00 0.00 0.00 99.93 17:47:01 all 0.26 0.00 0.01 0.56 0.00 99.17 17:47:01 0 0.02 0.00 0.02 0.00 0.03 99.93 17:47:01 1 0.03 0.00 0.02 0.02 0.00 99.93 17:47:01 2 1.97 0.00 0.00 4.37 0.00 93.66 17:47:01 3 0.00 0.00 0.00 0.00 0.00 100.00 17:47:01 4 0.00 0.00 0.02 0.00 0.00 99.98 17:47:01 5 0.00 0.00 0.00 0.00 0.02 99.98 17:47:01 6 0.02 0.00 0.00 0.00 0.00 99.98 17:47:01 7 0.02 0.00 0.02 0.00 0.02 99.95 17:48:01 all 0.04 0.00 0.01 0.53 0.00 99.41 17:48:01 0 0.02 0.00 0.02 0.02 0.00 99.95 17:48:01 1 0.03 0.00 0.00 0.00 0.00 99.97 17:48:01 2 0.22 0.00 0.05 4.24 0.00 95.50 17:48:01 3 0.00 0.00 0.00 0.00 0.00 100.00 17:48:01 4 0.02 0.00 0.00 0.00 0.00 99.98 17:48:01 5 0.02 0.00 0.00 0.00 0.00 99.98 17:48:01 6 0.02 0.00 0.02 0.00 0.00 99.97 17:48:01 7 0.03 0.00 0.00 0.00 0.00 99.97 17:49:01 all 0.01 0.00 0.01 0.54 0.01 99.43 17:49:01 0 0.02 0.00 0.05 0.00 0.03 99.90 17:49:01 1 0.03 0.00 0.00 0.00 0.00 99.97 17:49:01 2 0.00 0.00 0.00 4.35 0.02 95.63 17:49:01 3 0.00 0.00 0.02 0.00 0.00 99.98 17:49:01 4 0.00 0.00 0.00 0.00 0.00 100.00 17:49:01 5 0.00 0.00 0.00 0.00 0.00 100.00 17:49:01 6 0.02 0.00 0.02 0.00 0.00 99.97 17:49:01 7 0.02 0.00 0.00 0.00 0.02 99.97 17:49:01 CPU %user %nice %system %iowait %steal %idle 17:50:01 all 0.06 0.00 0.01 0.50 0.00 99.43 17:50:01 0 0.02 0.00 0.00 0.00 0.00 99.98 17:50:01 1 0.02 0.00 0.00 0.00 0.02 99.97 17:50:01 2 0.15 0.00 0.03 3.98 0.02 95.82 17:50:01 3 0.02 0.00 0.00 0.00 0.00 99.98 17:50:01 4 0.18 0.00 0.00 0.00 0.00 99.82 17:50:01 5 0.02 0.00 0.02 0.00 0.00 99.97 17:50:01 6 0.08 0.00 0.03 0.00 0.00 99.88 17:50:01 7 0.05 0.00 0.00 0.00 0.00 99.95 17:51:01 all 0.02 0.00 0.01 0.55 0.00 99.42 17:51:01 0 0.02 0.00 0.03 0.00 0.02 99.93 17:51:01 1 0.05 0.00 0.00 0.00 0.00 99.95 17:51:01 2 0.03 0.00 0.02 4.44 0.00 
95.51 17:51:01 3 0.00 0.00 0.00 0.00 0.00 100.00 17:51:01 4 0.02 0.00 0.02 0.00 0.00 99.97 17:51:01 5 0.00 0.00 0.00 0.00 0.00 100.00 17:51:01 6 0.00 0.00 0.00 0.00 0.00 100.00 17:51:01 7 0.02 0.00 0.00 0.00 0.02 99.97 17:52:01 all 0.01 0.00 0.02 0.51 0.01 99.45 17:52:01 0 0.02 0.00 0.03 0.00 0.02 99.93 17:52:01 1 0.00 0.00 0.02 0.00 0.00 99.98 17:52:01 2 0.02 0.00 0.00 4.07 0.02 95.90 17:52:01 3 0.00 0.00 0.02 0.00 0.00 99.98 17:52:01 4 0.02 0.00 0.00 0.00 0.00 99.98 17:52:01 5 0.02 0.00 0.03 0.05 0.02 99.88 17:52:01 6 0.02 0.00 0.03 0.00 0.00 99.95 17:52:01 7 0.00 0.00 0.03 0.00 0.00 99.97 17:53:01 all 0.01 0.00 0.01 0.59 0.00 99.39 17:53:01 0 0.05 0.00 0.02 0.00 0.02 99.92 17:53:01 1 0.03 0.00 0.00 0.00 0.00 99.97 17:53:01 2 0.00 0.00 0.00 4.72 0.00 95.28 17:53:01 3 0.00 0.00 0.02 0.00 0.00 99.98 17:53:01 4 0.00 0.00 0.02 0.00 0.00 99.98 17:53:01 5 0.00 0.00 0.00 0.00 0.00 100.00 17:53:01 6 0.00 0.00 0.02 0.00 0.00 99.98 17:53:01 7 0.02 0.00 0.00 0.00 0.02 99.97 17:54:01 all 0.17 0.00 0.01 0.23 0.00 99.58 17:54:01 0 0.03 0.00 0.03 0.00 0.02 99.92 17:54:01 1 0.02 0.00 0.00 0.00 0.02 99.97 17:54:01 2 1.26 0.00 0.03 1.85 0.00 96.86 17:54:01 3 0.00 0.00 0.00 0.02 0.00 99.98 17:54:01 4 0.00 0.00 0.02 0.00 0.00 99.98 17:54:01 5 0.00 0.00 0.00 0.00 0.00 100.00 17:54:01 6 0.02 0.00 0.00 0.00 0.00 99.98 17:54:01 7 0.02 0.00 0.00 0.00 0.00 99.98 17:55:01 all 0.25 0.00 0.00 0.00 0.00 99.74 17:55:01 0 0.00 0.00 0.02 0.00 0.02 99.97 17:55:01 1 0.05 0.00 0.00 0.00 0.00 99.95 17:55:01 2 1.92 0.00 0.00 0.02 0.02 98.05 17:55:01 3 0.02 0.00 0.00 0.02 0.00 99.97 17:55:01 4 0.02 0.00 0.00 0.00 0.00 99.98 17:55:01 5 0.02 0.00 0.00 0.00 0.00 99.98 17:55:01 6 0.00 0.00 0.02 0.00 0.00 99.98 17:55:01 7 0.00 0.00 0.00 0.00 0.00 100.00 17:56:01 all 0.13 0.00 0.01 0.00 0.00 99.85 17:56:01 0 0.96 0.00 0.05 0.00 0.02 98.97 17:56:01 1 0.00 0.00 0.00 0.00 0.00 100.00 17:56:01 2 0.02 0.00 0.00 0.03 0.00 99.95 17:56:01 3 0.00 0.00 0.00 0.00 0.00 100.00 17:56:01 4 0.00 0.00 0.00 0.00 0.00 100.00 
17:56:01 5 0.00 0.00 0.00 0.00 0.00 100.00 17:56:01 6 0.00 0.00 0.02 0.00 0.02 99.97 17:56:01 7 0.02 0.00 0.00 0.00 0.00 99.98 17:57:01 all 0.22 0.00 0.01 0.00 0.00 99.76 17:57:01 0 1.65 0.00 0.03 0.00 0.03 98.29 17:57:01 1 0.03 0.00 0.02 0.00 0.00 99.95 17:57:01 2 0.00 0.00 0.00 0.02 0.00 99.98 17:57:01 3 0.00 0.00 0.00 0.02 0.00 99.98 17:57:01 4 0.03 0.00 0.02 0.00 0.00 99.95 17:57:01 5 0.00 0.00 0.00 0.00 0.02 99.98 17:57:01 6 0.00 0.00 0.00 0.00 0.00 100.00 17:57:01 7 0.02 0.00 0.00 0.00 0.00 99.98 17:58:01 all 0.01 0.00 0.01 0.01 0.00 99.97 17:58:01 0 0.03 0.00 0.02 0.00 0.02 99.93 17:58:01 1 0.02 0.00 0.00 0.00 0.00 99.98 17:58:01 2 0.00 0.00 0.00 0.03 0.00 99.97 17:58:01 3 0.02 0.00 0.00 0.00 0.00 99.98 17:58:01 4 0.02 0.00 0.00 0.00 0.00 99.98 17:58:01 5 0.00 0.00 0.00 0.00 0.00 100.00 17:58:01 6 0.00 0.00 0.00 0.00 0.00 100.00 17:58:01 7 0.02 0.00 0.02 0.00 0.00 99.97 17:59:01 all 0.01 0.00 0.00 0.00 0.00 99.98 17:59:01 0 0.02 0.00 0.02 0.02 0.02 99.93 17:59:01 1 0.02 0.00 0.00 0.00 0.00 99.98 17:59:01 2 0.02 0.00 0.00 0.00 0.00 99.98 17:59:01 3 0.00 0.00 0.00 0.02 0.02 99.97 17:59:01 4 0.02 0.00 0.00 0.00 0.00 99.98 17:59:01 5 0.02 0.00 0.00 0.00 0.00 99.98 17:59:01 6 0.03 0.00 0.02 0.00 0.00 99.95 17:59:01 7 0.00 0.00 0.00 0.00 0.00 100.00 18:00:01 all 0.01 0.00 0.00 0.00 0.00 99.97 18:00:01 0 0.05 0.00 0.00 0.02 0.00 99.93 18:00:01 1 0.02 0.00 0.00 0.00 0.02 99.97 18:00:01 2 0.02 0.00 0.00 0.00 0.02 99.97 18:00:01 3 0.00 0.00 0.00 0.02 0.00 99.98 18:00:01 4 0.03 0.00 0.02 0.00 0.00 99.95 18:00:01 5 0.00 0.00 0.00 0.00 0.00 100.00 18:00:01 6 0.00 0.00 0.00 0.00 0.00 100.00 18:00:01 7 0.00 0.00 0.00 0.00 0.00 100.00 18:00:01 CPU %user %nice %system %iowait %steal %idle 18:01:01 all 0.07 0.00 0.00 0.00 0.01 99.92 18:01:01 0 0.40 0.00 0.00 0.02 0.02 99.57 18:01:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:01:01 2 0.03 0.00 0.02 0.00 0.02 99.93 18:01:01 3 0.00 0.00 0.00 0.00 0.00 100.00 18:01:01 4 0.02 0.00 0.00 0.00 0.00 99.98 18:01:01 5 0.00 0.00 0.00 0.00 0.02 
99.98 18:01:01 6 0.03 0.00 0.02 0.00 0.00 99.95 18:01:01 7 0.02 0.00 0.02 0.00 0.00 99.97 18:02:01 all 0.04 0.00 0.01 0.00 0.00 99.94 18:02:01 0 0.27 0.00 0.02 0.02 0.00 99.70 18:02:01 1 0.03 0.00 0.00 0.00 0.00 99.97 18:02:01 2 0.02 0.00 0.03 0.02 0.02 99.92 18:02:01 3 0.00 0.00 0.00 0.00 0.00 100.00 18:02:01 4 0.02 0.00 0.00 0.00 0.00 99.98 18:02:01 5 0.00 0.00 0.00 0.00 0.00 100.00 18:02:01 6 0.00 0.00 0.02 0.00 0.00 99.98 18:02:01 7 0.02 0.00 0.00 0.00 0.00 99.98 18:03:01 all 0.01 0.00 0.00 0.00 0.00 99.98 18:03:01 0 0.03 0.00 0.00 0.00 0.02 99.95 18:03:01 1 0.03 0.00 0.02 0.00 0.00 99.95 18:03:01 2 0.02 0.00 0.00 0.02 0.03 99.93 18:03:01 3 0.00 0.00 0.00 0.02 0.00 99.98 18:03:01 4 0.00 0.00 0.02 0.00 0.00 99.98 18:03:01 5 0.00 0.00 0.00 0.00 0.00 100.00 18:03:01 6 0.00 0.00 0.00 0.00 0.00 100.00 18:03:01 7 0.00 0.00 0.00 0.00 0.00 100.00 18:04:01 all 0.02 0.00 0.00 0.00 0.01 99.97 18:04:01 0 0.00 0.00 0.00 0.00 0.02 99.98 18:04:01 1 0.02 0.00 0.00 0.00 0.02 99.97 18:04:01 2 0.05 0.00 0.02 0.02 0.02 99.90 18:04:01 3 0.02 0.00 0.00 0.00 0.00 99.98 18:04:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:04:01 5 0.02 0.00 0.00 0.00 0.00 99.98 18:04:01 6 0.03 0.00 0.02 0.00 0.00 99.95 18:04:01 7 0.03 0.00 0.00 0.00 0.00 99.97 18:05:01 all 0.02 0.00 0.00 0.01 0.01 99.96 18:05:01 0 0.03 0.00 0.02 0.00 0.00 99.95 18:05:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:05:01 2 0.03 0.00 0.00 0.02 0.03 99.92 18:05:01 3 0.00 0.00 0.00 0.03 0.00 99.97 18:05:01 4 0.00 0.00 0.00 0.00 0.02 99.98 18:05:01 5 0.02 0.00 0.00 0.00 0.02 99.97 18:05:01 6 0.00 0.00 0.00 0.00 0.00 100.00 18:05:01 7 0.03 0.00 0.00 0.00 0.00 99.97 18:06:01 all 0.01 0.00 0.01 0.00 0.00 99.97 18:06:01 0 0.02 0.00 0.00 0.00 0.00 99.98 18:06:01 1 0.03 0.00 0.00 0.00 0.00 99.97 18:06:01 2 0.00 0.00 0.03 0.03 0.02 99.92 18:06:01 3 0.00 0.00 0.02 0.00 0.00 99.98 18:06:01 4 0.00 0.00 0.02 0.00 0.00 99.98 18:06:01 5 0.02 0.00 0.00 0.00 0.00 99.98 18:06:01 6 0.02 0.00 0.00 0.00 0.00 99.98 18:06:01 7 0.03 0.00 0.00 0.00 0.00 99.97 
18:07:01 all 0.02 0.00 0.01 0.00 0.00 99.96 18:07:01 0 0.03 0.00 0.03 0.00 0.02 99.92 18:07:01 1 0.03 0.00 0.02 0.00 0.00 99.95 18:07:01 2 0.02 0.00 0.02 0.03 0.02 99.92 18:07:01 3 0.00 0.00 0.00 0.02 0.00 99.98 18:07:01 4 0.00 0.00 0.02 0.00 0.00 99.98 18:07:01 5 0.02 0.00 0.00 0.00 0.00 99.98 18:07:01 6 0.02 0.00 0.02 0.00 0.00 99.97 18:07:01 7 0.02 0.00 0.03 0.00 0.02 99.93 18:08:01 all 0.05 0.00 0.02 0.02 0.01 99.90 18:08:01 0 0.05 0.00 0.02 0.00 0.00 99.93 18:08:01 1 0.18 0.00 0.03 0.03 0.02 99.73 18:08:01 2 0.03 0.00 0.02 0.02 0.02 99.92 18:08:01 3 0.00 0.00 0.02 0.08 0.00 99.90 18:08:01 4 0.02 0.00 0.02 0.00 0.00 99.97 18:08:01 5 0.07 0.00 0.07 0.00 0.00 99.87 18:08:01 6 0.02 0.00 0.00 0.00 0.00 99.98 18:08:01 7 0.03 0.00 0.02 0.00 0.00 99.95 18:09:01 all 0.01 0.00 0.01 0.00 0.00 99.97 18:09:01 0 0.02 0.00 0.00 0.00 0.02 99.97 18:09:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:09:01 2 0.02 0.00 0.02 0.03 0.02 99.92 18:09:01 3 0.00 0.00 0.00 0.02 0.00 99.98 18:09:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:09:01 5 0.00 0.00 0.00 0.00 0.02 99.98 18:09:01 6 0.00 0.00 0.00 0.00 0.00 100.00 18:09:01 7 0.03 0.00 0.02 0.00 0.00 99.95 18:10:01 all 0.08 0.00 0.00 0.00 0.00 99.90 18:10:01 0 0.02 0.00 0.02 0.00 0.00 99.97 18:10:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:10:01 2 0.02 0.00 0.02 0.02 0.02 99.93 18:10:01 3 0.00 0.00 0.00 0.00 0.00 100.00 18:10:01 4 0.02 0.00 0.02 0.00 0.00 99.97 18:10:01 5 0.57 0.00 0.00 0.00 0.00 99.43 18:10:01 6 0.00 0.00 0.02 0.00 0.00 99.98 18:10:01 7 0.03 0.00 0.00 0.00 0.00 99.97 18:11:01 all 0.01 0.00 0.01 0.00 0.00 99.98 18:11:01 0 0.03 0.00 0.00 0.00 0.00 99.97 18:11:01 1 0.03 0.00 0.02 0.00 0.00 99.95 18:11:01 2 0.02 0.00 0.02 0.02 0.02 99.93 18:11:01 3 0.00 0.00 0.00 0.00 0.00 100.00 18:11:01 4 0.03 0.00 0.00 0.00 0.00 99.97 18:11:01 5 0.02 0.00 0.00 0.00 0.00 99.98 18:11:01 6 0.00 0.00 0.02 0.00 0.00 99.98 18:11:01 7 0.00 0.00 0.00 0.00 0.00 100.00 18:11:01 CPU %user %nice %system %iowait %steal %idle 18:12:01 all 0.01 0.00 0.01 0.01 0.01 
99.97 18:12:01 0 0.00 0.00 0.00 0.00 0.02 99.98 18:12:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:12:01 2 0.02 0.00 0.02 0.03 0.02 99.92 18:12:01 3 0.00 0.00 0.00 0.02 0.00 99.98 18:12:01 4 0.00 0.00 0.02 0.00 0.00 99.98 18:12:01 5 0.00 0.00 0.00 0.00 0.00 100.00 18:12:01 6 0.00 0.00 0.00 0.00 0.00 100.00 18:12:01 7 0.03 0.00 0.00 0.00 0.00 99.97 18:13:01 all 0.02 0.00 0.01 0.00 0.00 99.96 18:13:01 0 0.02 0.00 0.00 0.00 0.00 99.98 18:13:01 1 0.00 0.00 0.00 0.00 0.02 99.98 18:13:01 2 0.05 0.00 0.03 0.02 0.02 99.88 18:13:01 3 0.00 0.00 0.02 0.00 0.02 99.97 18:13:01 4 0.03 0.00 0.00 0.00 0.00 99.97 18:13:01 5 0.03 0.00 0.00 0.00 0.00 99.97 18:13:01 6 0.02 0.00 0.03 0.02 0.02 99.92 18:13:01 7 0.00 0.00 0.02 0.00 0.00 99.98 18:14:01 all 0.05 0.00 0.01 0.00 0.00 99.93 18:14:01 0 0.02 0.00 0.02 0.00 0.00 99.97 18:14:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:14:01 2 0.03 0.00 0.05 0.03 0.02 99.87 18:14:01 3 0.28 0.00 0.00 0.00 0.00 99.72 18:14:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:14:01 5 0.02 0.00 0.02 0.00 0.02 99.95 18:14:01 6 0.02 0.00 0.02 0.00 0.00 99.97 18:14:01 7 0.03 0.00 0.02 0.00 0.00 99.95 18:15:01 all 0.01 0.00 0.00 0.00 0.00 99.99 18:15:01 0 0.00 0.00 0.00 0.00 0.00 100.00 18:15:01 1 0.00 0.00 0.00 0.00 0.00 100.00 18:15:01 2 0.00 0.00 0.00 0.02 0.02 99.97 18:15:01 3 0.00 0.00 0.00 0.00 0.00 100.00 18:15:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:15:01 5 0.00 0.00 0.00 0.00 0.00 100.00 18:15:01 6 0.03 0.00 0.00 0.00 0.00 99.97 18:15:01 7 0.00 0.00 0.00 0.00 0.00 100.00 18:16:01 all 0.01 0.00 0.00 0.00 0.00 99.98 18:16:01 0 0.03 0.00 0.00 0.00 0.02 99.95 18:16:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:16:01 2 0.02 0.00 0.00 0.03 0.02 99.93 18:16:01 3 0.02 0.00 0.00 0.00 0.00 99.98 18:16:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:16:01 5 0.00 0.00 0.00 0.00 0.00 100.00 18:16:01 6 0.00 0.00 0.02 0.02 0.02 99.95 18:16:01 7 0.03 0.00 0.00 0.00 0.00 99.97 18:17:01 all 0.14 0.00 0.01 0.00 0.00 99.84 18:17:01 0 0.00 0.00 0.00 0.00 0.00 100.00 18:17:01 1 0.00 0.00 0.02 0.00 0.00 
99.98 18:17:01 2 0.03 0.00 0.03 0.02 0.02 99.90 18:17:01 3 0.00 0.00 0.00 0.00 0.00 100.00 18:17:01 4 0.00 0.00 0.02 0.00 0.00 99.98 18:17:01 5 0.00 0.00 0.00 0.00 0.00 100.00 18:17:01 6 1.08 0.00 0.00 0.00 0.00 98.92 18:17:01 7 0.00 0.00 0.00 0.00 0.00 100.00 18:18:01 all 0.12 0.00 0.01 0.00 0.00 99.86 18:18:01 0 0.02 0.00 0.02 0.00 0.00 99.97 18:18:01 1 0.02 0.00 0.03 0.00 0.02 99.93 18:18:01 2 0.00 0.00 0.02 0.03 0.02 99.93 18:18:01 3 0.00 0.00 0.02 0.00 0.00 99.98 18:18:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:18:01 5 0.00 0.00 0.02 0.00 0.02 99.97 18:18:01 6 0.96 0.00 0.00 0.00 0.00 99.04 18:18:01 7 0.00 0.00 0.00 0.00 0.00 100.00 18:19:01 all 0.01 0.00 0.01 0.00 0.01 99.97 18:19:01 0 0.02 0.00 0.00 0.00 0.00 99.98 18:19:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:19:01 2 0.03 0.00 0.02 0.02 0.02 99.92 18:19:01 3 0.02 0.00 0.00 0.00 0.00 99.98 18:19:01 4 0.00 0.00 0.02 0.00 0.00 99.98 18:19:01 5 0.00 0.00 0.00 0.00 0.00 100.00 18:19:01 6 0.00 0.00 0.00 0.00 0.02 99.98 18:19:01 7 0.02 0.00 0.02 0.00 0.02 99.95 18:20:01 all 0.01 0.00 0.01 0.00 0.01 99.97 18:20:01 0 0.03 0.00 0.00 0.00 0.00 99.97 18:20:01 1 0.00 0.00 0.02 0.00 0.02 99.97 18:20:01 2 0.05 0.00 0.03 0.03 0.02 99.87 18:20:01 3 0.00 0.00 0.00 0.00 0.00 100.00 18:20:01 4 0.00 0.00 0.00 0.00 0.02 99.98 18:20:01 5 0.02 0.00 0.00 0.00 0.02 99.97 18:20:01 6 0.00 0.00 0.00 0.00 0.00 100.00 18:20:01 7 0.00 0.00 0.02 0.00 0.00 99.98 18:21:01 all 0.14 0.00 0.01 0.00 0.00 99.85 18:21:01 0 0.00 0.00 0.00 0.00 0.00 100.00 18:21:01 1 0.00 0.00 0.02 0.00 0.00 99.98 18:21:01 2 0.02 0.00 0.02 0.03 0.02 99.92 18:21:01 3 0.00 0.00 0.00 0.00 0.00 100.00 18:21:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:21:01 5 0.00 0.00 0.00 0.00 0.00 100.00 18:21:01 6 1.03 0.00 0.02 0.00 0.00 98.96 18:21:01 7 0.02 0.00 0.00 0.00 0.00 99.98 18:22:01 all 0.25 0.00 0.01 0.00 0.00 99.73 18:22:01 0 0.03 0.00 0.00 0.00 0.00 99.97 18:22:01 1 0.00 0.00 0.02 0.00 0.00 99.98 18:22:01 2 0.00 0.00 0.03 0.03 0.02 99.92 18:22:01 3 0.00 0.00 0.00 0.00 0.00 
100.00 18:22:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:22:01 5 0.00 0.00 0.02 0.00 0.00 99.98 18:22:01 6 1.97 0.00 0.00 0.00 0.02 98.01 18:22:01 7 0.00 0.00 0.00 0.00 0.00 100.00 18:22:01 CPU %user %nice %system %iowait %steal %idle 18:23:01 all 0.41 0.00 0.03 0.01 0.01 99.54 18:23:01 0 0.40 0.00 0.03 0.02 0.00 99.55 18:23:01 1 0.07 0.00 0.03 0.02 0.02 99.87 18:23:01 2 0.95 0.00 0.03 0.07 0.02 98.93 18:23:01 3 0.22 0.00 0.03 0.00 0.00 99.75 18:23:01 4 0.05 0.00 0.02 0.00 0.00 99.93 18:23:01 5 0.37 0.00 0.03 0.02 0.02 99.57 18:23:01 6 1.24 0.00 0.03 0.00 0.00 98.73 18:23:01 7 0.05 0.00 0.00 0.00 0.00 99.95 18:24:01 all 0.01 0.00 0.01 0.00 0.00 99.97 18:24:01 0 0.02 0.00 0.02 0.00 0.00 99.97 18:24:01 1 0.03 0.00 0.00 0.00 0.00 99.97 18:24:01 2 0.00 0.00 0.02 0.03 0.00 99.95 18:24:01 3 0.02 0.00 0.02 0.00 0.02 99.95 18:24:01 4 0.02 0.00 0.00 0.00 0.00 99.98 18:24:01 5 0.02 0.00 0.00 0.00 0.00 99.98 18:24:01 6 0.02 0.00 0.02 0.00 0.02 99.95 18:24:01 7 0.00 0.00 0.03 0.00 0.00 99.97 18:25:02 all 0.01 0.00 0.00 0.00 0.01 99.97 18:25:02 0 0.02 0.00 0.00 0.00 0.00 99.98 18:25:02 1 0.02 0.00 0.00 0.00 0.00 99.98 18:25:02 2 0.02 0.00 0.00 0.02 0.02 99.95 18:25:02 3 0.02 0.00 0.02 0.00 0.03 99.93 18:25:02 4 0.00 0.00 0.02 0.00 0.00 99.98 18:25:02 5 0.02 0.00 0.00 0.00 0.00 99.98 18:25:02 6 0.00 0.00 0.00 0.00 0.00 100.00 18:25:02 7 0.02 0.00 0.00 0.00 0.00 99.98 18:26:01 all 0.01 0.00 0.01 0.00 0.00 99.98 18:26:01 0 0.00 0.00 0.00 0.00 0.00 100.00 18:26:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:26:01 2 0.00 0.00 0.00 0.03 0.00 99.97 18:26:01 3 0.02 0.00 0.00 0.00 0.02 99.97 18:26:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:26:01 5 0.00 0.00 0.02 0.00 0.00 99.98 18:26:01 6 0.00 0.00 0.00 0.00 0.00 100.00 18:26:01 7 0.00 0.00 0.00 0.00 0.02 99.98 18:27:01 all 0.01 0.00 0.00 0.00 0.00 99.98 18:27:01 0 0.02 0.00 0.00 0.00 0.00 99.98 18:27:01 1 0.02 0.00 0.02 0.00 0.00 99.97 18:27:01 2 0.00 0.00 0.00 0.02 0.00 99.98 18:27:01 3 0.02 0.00 0.02 0.00 0.02 99.95 18:27:01 4 0.00 0.00 0.00 0.00 
0.00 100.00 18:27:01 5 0.02 0.00 0.00 0.00 0.00 99.98 18:27:01 6 0.00 0.00 0.00 0.00 0.00 100.00 18:27:01 7 0.03 0.00 0.00 0.00 0.00 99.97 18:28:01 all 0.01 0.00 0.01 0.01 0.01 99.97 18:28:01 0 0.07 0.00 0.00 0.00 0.02 99.92 18:28:01 1 0.00 0.00 0.00 0.00 0.02 99.98 18:28:01 2 0.00 0.00 0.00 0.07 0.00 99.93 18:28:01 3 0.00 0.00 0.02 0.00 0.02 99.97 18:28:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:28:01 5 0.02 0.00 0.00 0.00 0.02 99.97 18:28:01 6 0.00 0.00 0.02 0.00 0.00 99.98 18:28:01 7 0.00 0.00 0.00 0.00 0.02 99.98 18:29:01 all 0.02 0.00 0.00 0.00 0.00 99.97 18:29:01 0 0.02 0.00 0.00 0.00 0.00 99.98 18:29:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:29:01 2 0.02 0.00 0.02 0.03 0.00 99.93 18:29:01 3 0.03 0.00 0.02 0.00 0.02 99.93 18:29:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:29:01 5 0.00 0.00 0.02 0.00 0.00 99.98 18:29:01 6 0.00 0.00 0.00 0.00 0.00 100.00 18:29:01 7 0.03 0.00 0.00 0.00 0.00 99.97 18:30:01 all 0.04 0.00 0.01 0.00 0.01 99.94 18:30:01 0 0.25 0.00 0.00 0.00 0.00 99.75 18:30:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:30:01 2 0.00 0.00 0.00 0.02 0.00 99.98 18:30:01 3 0.02 0.00 0.02 0.00 0.02 99.95 18:30:01 4 0.00 0.00 0.02 0.00 0.00 99.98 18:30:01 5 0.02 0.00 0.00 0.00 0.00 99.98 18:30:01 6 0.00 0.00 0.02 0.00 0.00 99.98 18:30:01 7 0.00 0.00 0.00 0.00 0.02 99.98 18:31:01 all 0.01 0.00 0.01 0.00 0.00 99.98 18:31:01 0 0.00 0.00 0.00 0.00 0.00 100.00 18:31:01 1 0.02 0.00 0.00 0.00 0.00 99.98 18:31:01 2 0.00 0.00 0.00 0.03 0.00 99.97 18:31:01 3 0.00 0.00 0.03 0.02 0.03 99.92 18:31:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:31:01 5 0.02 0.00 0.02 0.00 0.02 99.95 18:31:01 6 0.02 0.00 0.00 0.00 0.00 99.98 18:31:01 7 0.03 0.00 0.02 0.00 0.00 99.95 18:32:01 all 0.01 0.00 0.00 0.00 0.00 99.97 18:32:01 0 0.02 0.00 0.00 0.00 0.00 99.98 18:32:01 1 0.00 0.00 0.00 0.00 0.00 100.00 18:32:01 2 0.00 0.00 0.00 0.02 0.00 99.98 18:32:01 3 0.02 0.00 0.00 0.00 0.02 99.97 18:32:01 4 0.00 0.00 0.00 0.00 0.00 100.00 18:32:01 5 0.02 0.00 0.00 0.00 0.00 99.98 18:32:01 6 0.00 0.00 0.02 0.00 0.00 
99.98
18:32:01 7 0.02 0.00 0.00 0.00 0.00 99.98
18:33:01 all 0.01 0.00 0.00 0.00 0.00 99.98
18:33:01 0 0.00 0.00 0.00 0.00 0.00 100.00
18:33:01 1 0.03 0.00 0.02 0.00 0.02 99.93
18:33:01 2 0.00 0.00 0.00 0.02 0.00 99.98
18:33:01 3 0.02 0.00 0.00 0.02 0.02 99.95
18:33:01 4 0.00 0.00 0.00 0.00 0.00 100.00
18:33:01 5 0.02 0.00 0.00 0.00 0.00 99.98
18:33:01 6 0.00 0.00 0.00 0.00 0.00 100.00
18:33:01 7 0.00 0.00 0.00 0.00 0.02 99.98
18:33:01 CPU %user %nice %system %iowait %steal %idle
18:34:01 all 0.02 0.00 0.01 0.00 0.00 99.97
18:34:01 0 0.03 0.00 0.00 0.00 0.00 99.97
18:34:01 1 0.00 0.00 0.00 0.00 0.00 100.00
18:34:01 2 0.02 0.00 0.00 0.02 0.00 99.97
18:34:01 3 0.03 0.00 0.03 0.00 0.02 99.92
18:34:01 4 0.02 0.00 0.02 0.00 0.00 99.97
18:34:01 5 0.03 0.00 0.00 0.00 0.02 99.95
18:34:01 6 0.02 0.00 0.02 0.00 0.00 99.97
18:34:01 7 0.02 0.00 0.00 0.00 0.00 99.98
18:35:01 all 0.01 0.00 0.01 0.00 0.00 99.98
18:35:01 0 0.00 0.00 0.00 0.00 0.00 100.00
18:35:01 1 0.02 0.00 0.02 0.00 0.00 99.97
18:35:01 2 0.00 0.00 0.00 0.03 0.02 99.95
18:35:01 3 0.02 0.00 0.02 0.00 0.02 99.95
18:35:01 4 0.00 0.00 0.00 0.00 0.00 100.00
18:35:01 5 0.00 0.00 0.02 0.00 0.00 99.98
18:35:01 6 0.00 0.00 0.00 0.00 0.00 100.00
18:35:01 7 0.00 0.00 0.02 0.00 0.00 99.98
18:36:01 all 0.01 0.00 0.01 0.00 0.00 99.97
18:36:01 0 0.02 0.00 0.00 0.00 0.00 99.98
18:36:01 1 0.02 0.00 0.02 0.00 0.00 99.97
18:36:01 2 0.02 0.00 0.00 0.02 0.00 99.97
18:36:01 3 0.00 0.00 0.02 0.00 0.00 99.98
18:36:01 4 0.00 0.00 0.00 0.00 0.00 100.00
18:36:01 5 0.00 0.00 0.00 0.00 0.02 99.98
18:36:01 6 0.00 0.00 0.00 0.00 0.00 100.00
18:36:01 7 0.02 0.00 0.02 0.00 0.00 99.97
18:37:01 all 0.01 0.00 0.00 0.00 0.00 99.98
18:37:01 0 0.02 0.00 0.02 0.02 0.00 99.95
18:37:01 1 0.00 0.00 0.00 0.00 0.02 99.98
18:37:01 2 0.00 0.00 0.00 0.02 0.00 99.98
18:37:01 3 0.05 0.00 0.02 0.00 0.02 99.92
18:37:01 4 0.00 0.00 0.00 0.00 0.00 100.00
18:37:01 5 0.02 0.00 0.00 0.00 0.00 99.98
18:37:01 6 0.00 0.00 0.00 0.00 0.00 100.00
18:37:01 7 0.02 0.00 0.00 0.00 0.00 99.98
18:38:01 all 0.02 0.00 0.01 0.00 0.00 99.96
18:38:01 0 0.02 0.00 0.00 0.00 0.02 99.97
18:38:01 1 0.02 0.00 0.02 0.00 0.00 99.97
18:38:01 2 0.05 0.00 0.02 0.02 0.00 99.92
18:38:01 3 0.03 0.00 0.03 0.02 0.02 99.90
18:38:01 4 0.02 0.00 0.00 0.00 0.00 99.98
18:38:01 5 0.03 0.00 0.00 0.00 0.02 99.95
18:38:01 6 0.02 0.00 0.00 0.00 0.00 99.98
18:38:01 7 0.02 0.00 0.02 0.00 0.00 99.97
18:39:01 all 0.01 0.00 0.00 0.00 0.00 99.98
18:39:01 0 0.00 0.00 0.00 0.00 0.00 100.00
18:39:01 1 0.02 0.00 0.00 0.00 0.00 99.98
18:39:01 2 0.00 0.00 0.00 0.02 0.00 99.98
18:39:01 3 0.03 0.00 0.00 0.00 0.02 99.95
18:39:01 4 0.00 0.00 0.00 0.00 0.00 100.00
18:39:01 5 0.02 0.00 0.00 0.00 0.00 99.98
18:39:01 6 0.00 0.00 0.00 0.00 0.00 100.00
18:39:01 7 0.00 0.00 0.02 0.00 0.00 99.98
18:40:01 all 0.01 0.00 0.00 0.01 0.00 99.96
18:40:01 0 0.02 0.00 0.00 0.00 0.00 99.98
18:40:01 1 0.00 0.00 0.00 0.00 0.00 100.00
18:40:01 2 0.02 0.00 0.00 0.02 0.00 99.97
18:40:01 3 0.02 0.00 0.02 0.08 0.02 99.87
18:40:01 4 0.00 0.00 0.00 0.00 0.00 100.00
18:40:01 5 0.02 0.00 0.00 0.00 0.02 99.97
18:40:01 6 0.00 0.00 0.00 0.00 0.00 100.00
18:40:01 7 0.02 0.00 0.02 0.00 0.00 99.97
18:41:01 all 0.01 0.00 0.00 0.01 0.00 99.98
18:41:01 0 0.00 0.00 0.00 0.00 0.00 100.00
18:41:01 1 0.02 0.00 0.00 0.00 0.02 99.97
18:41:01 2 0.00 0.00 0.02 0.03 0.00 99.95
18:41:01 3 0.02 0.00 0.02 0.02 0.02 99.93
18:41:01 4 0.00 0.00 0.00 0.00 0.00 100.00
18:41:01 5 0.02 0.00 0.00 0.00 0.00 99.98
18:41:01 6 0.00 0.00 0.00 0.00 0.00 100.00
18:41:01 7 0.00 0.00 0.00 0.00 0.00 100.00
18:42:01 all 0.16 0.00 0.00 0.00 0.00 99.83
18:42:01 0 0.00 0.00 0.02 0.00 0.00 99.98
18:42:01 1 0.00 0.00 0.02 0.00 0.00 99.98
18:42:01 2 0.02 0.00 0.00 0.03 0.00 99.95
18:42:01 3 0.03 0.00 0.02 0.00 0.02 99.93
18:42:01 4 0.00 0.00 0.00 0.00 0.00 100.00
18:42:01 5 1.04 0.00 0.00 0.00 0.02 98.94
18:42:01 6 0.20 0.00 0.00 0.00 0.00 99.80
18:42:01 7 0.03 0.00 0.00 0.00 0.00 99.97
18:43:01 all 0.26 0.00 0.01 0.01 0.00 99.73
18:43:01 0 0.02 0.00 0.00 0.00 0.00 99.98
18:43:01 1 0.00 0.00 0.00 0.00 0.00 100.00
18:43:01 2 0.02 0.00 0.02 0.03 0.00 99.93
18:43:01 3 0.00 0.00 0.00 0.02 0.02 99.97
18:43:01 4 0.00 0.00 0.02 0.00 0.00 99.98
18:43:01 5 1.97 0.00 0.00 0.00 0.00 98.03
18:43:01 6 0.02 0.00 0.00 0.00 0.02 99.97
18:43:01 7 0.00 0.00 0.00 0.00 0.00 100.00
18:44:01 all 0.04 0.00 0.00 0.00 0.00 99.94
18:44:01 0 0.02 0.00 0.00 0.00 0.00 99.98
18:44:01 1 0.00 0.00 0.00 0.00 0.00 100.00
18:44:01 2 0.00 0.00 0.00 0.02 0.00 99.98
18:44:01 3 0.03 0.00 0.00 0.02 0.02 99.93
18:44:01 4 0.00 0.00 0.02 0.00 0.00 99.98
18:44:01 5 0.25 0.00 0.02 0.00 0.02 99.72
18:44:01 6 0.03 0.00 0.02 0.00 0.00 99.95
18:44:01 7 0.00 0.00 0.00 0.00 0.00 100.00
18:44:01 CPU %user %nice %system %iowait %steal %idle
18:45:01 all 0.04 0.00 0.01 0.00 0.00 99.95
18:45:01 0 0.00 0.00 0.00 0.00 0.00 100.00
18:45:01 1 0.00 0.00 0.00 0.00 0.00 100.00
18:45:01 2 0.05 0.00 0.03 0.02 0.02 99.88
18:45:01 3 0.02 0.00 0.02 0.00 0.02 99.95
18:45:01 4 0.00 0.00 0.00 0.00 0.02 99.98
18:45:01 5 0.23 0.00 0.00 0.00 0.00 99.77
18:45:01 6 0.03 0.00 0.00 0.00 0.00 99.97
18:45:01 7 0.02 0.00 0.00 0.00 0.00 99.98
18:46:01 all 3.57 0.00 0.40 0.65 0.02 95.37
18:46:01 0 3.83 0.00 0.07 0.00 0.02 96.08
18:46:01 1 1.62 0.00 0.17 0.00 0.02 98.20
18:46:01 2 0.35 0.00 0.17 0.72 0.00 98.77
18:46:01 3 1.65 0.00 0.25 0.27 0.03 97.80
18:46:01 4 5.01 0.00 1.54 3.87 0.02 89.56
18:46:01 5 9.45 0.00 0.41 0.18 0.02 89.94
18:46:01 6 4.23 0.00 0.30 0.10 0.02 95.35
18:46:01 7 2.34 0.00 0.27 0.03 0.02 97.35
18:47:01 all 10.10 0.00 0.81 1.79 0.03 87.28
18:47:01 0 0.48 0.00 0.15 0.00 0.00 99.37
18:47:01 1 0.58 0.00 0.08 0.00 0.00 99.33
18:47:01 2 0.08 0.00 0.37 3.35 0.02 96.19
18:47:01 3 4.69 0.00 0.82 6.85 0.07 87.58
18:47:01 4 6.26 0.00 0.53 0.55 0.02 92.64
18:47:01 5 46.41 0.00 2.82 3.27 0.10 47.40
18:47:01 6 20.49 0.00 1.33 0.05 0.03 78.09
18:47:01 7 1.74 0.00 0.38 0.27 0.00 97.61
18:48:01 all 12.24 0.00 3.24 1.63 0.06 82.83
18:48:01 0 3.55 0.00 3.65 0.08 0.03 92.69
18:48:01 1 11.04 0.00 2.50 0.32 0.07 86.07
18:48:01 2 11.45 0.00 3.53 0.17 0.03 84.82
18:48:01 3 11.76 0.00 3.16 7.87 0.07 77.14
18:48:01 4 30.19 0.00 3.72 2.57 0.07 63.45
18:48:01 5 16.51 0.00 3.99 0.03 0.07 79.41
18:48:01 6 6.71 0.00 2.63 0.07 0.05 90.54
18:48:01 7 6.74 0.00 2.78 1.99 0.05 88.44
18:49:01 all 3.77 0.00 1.81 15.67 0.04 78.72
18:49:01 0 3.48 0.00 1.44 1.44 0.02 93.62
18:49:01 1 4.50 0.00 1.38 0.00 0.02 94.11
18:49:01 2 3.00 0.00 1.66 0.08 0.03 95.23
18:49:01 3 3.59 0.00 2.37 74.59 0.07 19.38
18:49:01 4 3.76 0.00 1.92 30.87 0.05 63.40
18:49:01 5 2.81 0.00 2.63 0.67 0.02 93.87
18:49:01 6 4.40 0.00 1.70 6.94 0.03 86.93
18:49:01 7 4.60 0.00 1.46 11.42 0.05 82.47
18:50:01 all 12.19 0.00 3.81 7.63 0.07 76.30
18:50:01 0 14.56 0.00 3.92 0.44 0.08 81.00
18:50:01 1 13.33 0.00 4.14 2.80 0.08 79.65
18:50:01 2 11.79 0.00 3.62 1.51 0.07 83.01
18:50:01 3 11.71 0.00 3.74 6.30 0.08 78.16
18:50:01 4 11.37 0.00 4.48 20.16 0.07 63.92
18:50:01 5 11.36 0.00 3.58 3.00 0.08 81.98
18:50:01 6 12.31 0.00 3.40 21.68 0.05 62.56
18:50:01 7 11.02 0.00 3.52 5.24 0.07 80.15
18:51:01 all 25.37 0.00 2.23 0.55 0.08 71.77
18:51:01 0 23.88 0.00 2.26 0.05 0.08 73.73
18:51:01 1 27.08 0.00 2.39 0.08 0.07 70.37
18:51:01 2 28.68 0.00 2.52 3.68 0.08 65.03
18:51:01 3 21.49 0.00 1.96 0.03 0.05 76.47
18:51:01 4 26.36 0.00 2.34 0.55 0.10 70.65
18:51:01 5 25.79 0.00 2.16 0.00 0.07 71.98
18:51:01 6 15.60 0.00 1.04 0.00 0.07 83.29
18:51:01 7 34.10 0.00 3.21 0.03 0.08 62.57
18:52:01 all 1.40 0.00 0.17 0.02 0.04 98.38
18:52:01 0 1.57 0.00 0.18 0.00 0.07 98.18
18:52:01 1 1.65 0.00 0.22 0.05 0.03 98.05
18:52:01 2 1.35 0.00 0.07 0.00 0.02 98.56
18:52:01 3 1.43 0.00 0.22 0.00 0.05 98.30
18:52:01 4 1.00 0.00 0.08 0.10 0.03 98.78
18:52:01 5 1.67 0.00 0.17 0.00 0.02 98.15
18:52:01 6 1.04 0.00 0.22 0.00 0.03 98.71
18:52:01 7 1.47 0.00 0.20 0.00 0.03 98.30
18:53:01 all 2.18 0.00 0.59 0.26 0.03 96.93
18:53:01 0 2.56 0.00 0.48 0.02 0.05 96.89
18:53:01 1 1.97 0.00 0.62 0.18 0.03 97.19
18:53:01 2 4.57 0.00 0.63 0.22 0.03 94.54
18:53:01 3 1.32 0.00 0.47 0.10 0.02 98.09
18:53:01 4 1.35 0.00 0.63 0.77 0.03 97.21
18:53:01 5 1.29 0.00 0.60 0.35 0.05 97.71
18:53:01 6 2.28 0.00 0.50 0.18 0.05 96.98
18:53:01 7 2.08 0.00 0.79 0.25 0.03 96.85
Average: all 1.02 0.00 0.18 0.66 0.01 98.12
Average: 0 0.81 0.00 0.18 0.08 0.01 98.92
Average: 1 0.85 0.00 0.16 0.06 0.01 98.92
Average: 2 0.99 0.00 0.18 1.37 0.01 97.44
Average: 3 0.79 0.00 0.19 1.27 0.01 97.74
Average: 4 1.16 0.00 0.21 1.00 0.01 97.62
Average: 5 1.62 0.00 0.23 0.50 0.01 97.65
Average: 6 1.03 0.00 0.16 0.41 0.01 98.39
Average: 7 0.91 0.00 0.18 0.61 0.01 98.29
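For anyone post-processing this job's CPU report, a minimal sketch of parsing `sar -u -P ALL`-style rows like the ones above into structured records. The function name `parse_sar_cpu` and the embedded `SAMPLE` excerpt are illustrative only (the sample rows are copied from the log above); a real consumer would feed it the full console text.

```python
# Minimal parser for per-CPU sar utilization rows. Each data row has
# 8 whitespace-separated fields: timestamp, CPU id ("all" or a number),
# then %user %nice %system %iowait %steal %idle. Header rows repeat the
# column names and are skipped.

def parse_sar_cpu(text):
    """Return a list of dicts, one per sar data row; headers and partial lines are skipped."""
    fields = ("user", "nice", "system", "iowait", "steal", "idle")
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 8 or parts[1] == "CPU":
            continue  # not a complete data row
        rows.append({
            "time": parts[0],
            "cpu": parts[1],
            **{k: float(v) for k, v in zip(fields, parts[2:])},
        })
    return rows


# Sample rows taken verbatim from the report above.
SAMPLE = """\
18:47:01 CPU %user %nice %system %iowait %steal %idle
18:47:01 all 10.10 0.00 0.81 1.79 0.03 87.28
18:47:01 5 46.41 0.00 2.82 3.27 0.10 47.40
Average: all 1.02 0.00 0.18 0.66 0.01 98.12
"""

if __name__ == "__main__":
    rows = parse_sar_cpu(SAMPLE)
    busiest = max(rows, key=lambda r: r["user"])
    # The %user spike on CPU 5 at 18:47:01 coincides with the test workload ramp-up.
    print(busiest["time"], busiest["cpu"], busiest["user"])  # 18:47:01 5 46.41
```

The same record shape works for the `Average:` summary rows, since they share the 8-field layout.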