Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-997 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-LWJP1nzhhPHF/agent.2825
SSH_AGENT_PID=2827
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_2266622403470542149.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_2266622403470542149.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision fbfc234895c48282e2e92b44c8c8b49745e81745 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f fbfc234895c48282e2e92b44c8c8b49745e81745 # timeout=30
Commit message: "Improve CSIT helm charts"
 > git rev-list --no-walk fbfc234895c48282e2e92b44c8c8b49745e81745 # timeout=10
provisioning config files...
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6470272922785130586.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-tEtV
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-tEtV/bin to PATH
Generating Requirements File
Python 3.10.6
pip 23.3.2 from /tmp/venv-tEtV/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4 argcomplete==3.2.2 aspy.yaml==1.3.0 attrs==23.2.0 autopage==0.5.2
beautifulsoup4==4.12.3 boto3==1.34.31 botocore==1.34.31 bs4==0.0.2 cachetools==5.3.2
certifi==2023.11.17 cffi==1.16.0 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.3.2
click==8.1.7 cliff==4.5.0 cmd2==2.4.3 cryptography==3.3.2 debtcollector==2.5.0
decorator==5.1.1 defusedxml==0.7.1 Deprecated==1.2.14 distlib==0.3.8 dnspython==2.5.0
docker==4.2.2 dogpile.cache==1.3.0 email-validator==2.1.0.post1 filelock==3.13.1
future==0.18.3 gitdb==4.0.11 GitPython==3.1.41 google-auth==2.27.0 httplib2==0.22.0
identify==2.5.33 idna==3.6 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.3
jmespath==1.0.1 jsonpatch==1.33 jsonpointer==2.4 jsonschema==4.21.1
jsonschema-specifications==2023.12.1 keystoneauth1==5.5.0 kubernetes==29.0.0
lftools==0.37.8 lxml==5.1.0 MarkupSafe==2.1.4 msgpack==1.0.7 multi_key_dict==2.0.3
munch==4.0.0 netaddr==0.10.1 netifaces==0.11.0 niet==1.4.2 nodeenv==1.8.0
oauth2client==4.1.3 oauthlib==3.2.2 openstacksdk==0.62.0 os-client-config==2.1.0
os-service-types==1.7.0 osc-lib==3.0.0 oslo.config==9.3.0 oslo.context==5.3.0
oslo.i18n==6.2.0 oslo.log==5.4.0 oslo.serialization==5.3.0 oslo.utils==7.0.0
packaging==23.2 pbr==6.0.0 platformdirs==4.1.0 prettytable==3.9.0 pyasn1==0.5.1
pyasn1-modules==0.3.0 pycparser==2.21 pygerrit2==2.0.15 PyGithub==2.2.0
pyinotify==0.9.6 PyJWT==2.8.0 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.8.2
pyrsistent==0.20.0 python-cinderclient==9.4.0 python-dateutil==2.8.2
python-heatclient==3.4.0 python-jenkins==1.8.2 python-keystoneclient==5.3.0
python-magnumclient==4.3.0 python-novaclient==18.4.0 python-openstackclient==6.0.0
python-swiftclient==4.4.0 pytz==2023.4 PyYAML==6.0.1 referencing==0.33.0
requests==2.31.0 requests-oauthlib==1.3.1 requestsexceptions==1.4.0 rfc3986==2.0.0
rpds-py==0.17.1 rsa==4.9 ruamel.yaml==0.18.5 ruamel.yaml.clib==0.2.8
s3transfer==0.10.0 simplejson==3.19.2 six==1.16.0 smmap==5.0.1 soupsieve==2.5
stevedore==5.1.0 tabulate==0.9.0 toml==0.10.2 tomlkit==0.12.3 tqdm==4.66.1
typing_extensions==4.9.0 tzdata==2023.4 urllib3==1.26.18 virtualenv==20.25.0
wcwidth==0.2.13 websocket-client==1.7.0 wrapt==1.16.0 xdg==6.0.0 xmltodict==0.13.0
yq==3.2.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins4706018713246269644.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins15861130723395886183.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
+ set +u
+ save_set
+ RUN_CSIT_SAVE_SET=ehxB
+ RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
+ '[' 1 -eq 0 ']'
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ export ROBOT_VARIABLES=
+ ROBOT_VARIABLES=
+ export PROJECT=pap
+ PROJECT=pap
+ cd /w/workspace/policy-pap-master-project-csit-pap
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ mktemp -d
++ ROBOT_VENV=/tmp/tmp.HWPfvHz0nD
++ echo ROBOT_VENV=/tmp/tmp.HWPfvHz0nD
+++ python3 --version
++ echo 'Python version is: Python 3.6.9'
Python version is: Python 3.6.9
++ python3 -m venv --clear /tmp/tmp.HWPfvHz0nD
++ source /tmp/tmp.HWPfvHz0nD/bin/activate
+++ deactivate nondestructive
+++ '[' -n '' ']'
+++ '[' -n '' ']'
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
+++ '[' -n '' ']'
+++ unset VIRTUAL_ENV
+++ '[' '!' nondestructive = nondestructive ']'
+++ VIRTUAL_ENV=/tmp/tmp.HWPfvHz0nD
+++ export VIRTUAL_ENV
+++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ PATH=/tmp/tmp.HWPfvHz0nD/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ export PATH
+++ '[' -n '' ']'
+++ '[' -z '' ']'
+++ _OLD_VIRTUAL_PS1=
+++ '[' 'x(tmp.HWPfvHz0nD) ' '!=' x ']'
+++ PS1='(tmp.HWPfvHz0nD) '
+++ export PS1
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
++ set -exu
++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
++ echo 'Installing Python Requirements'
Installing Python Requirements
++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
++ python3 -m pip -qq freeze
bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1
charset-normalizer==2.0.12 cryptography==40.0.2 decorator==5.1.1 elasticsearch==7.17.9
elasticsearch-dsl==7.4.1 enum34==1.1.10 idna==3.6 importlib-resources==5.4.0
ipaddr==2.2.0 isodate==0.6.1 jmespath==0.10.0 jsonpatch==1.32 jsonpath-rw==1.4.0
jsonpointer==2.3 lxml==5.1.0 netaddr==0.8.0 netifaces==0.11.0 odltools==0.1.28
paramiko==3.4.0 pkg_resources==0.0.0 ply==3.11 pyang==2.6.0 pyangbind==0.8.1
pycparser==2.21 pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2
regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2
robotframework-pythonlibcore==3.0.0 robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0 robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0 scapy==2.5.0 scp==0.14.5 selenium==3.141.0
six==1.16.0 soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7
WebTest==3.0.0 zipp==3.6.0
++ mkdir -p /tmp/tmp.HWPfvHz0nD/src/onap
++ rm -rf /tmp/tmp.HWPfvHz0nD/src/onap/testsuite
++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
++ echo 'Installing python confluent-kafka library'
Installing python confluent-kafka library
++ python3 -m pip install -qq confluent-kafka
++ echo 'Uninstall docker-py and reinstall docker.'
Uninstall docker-py and reinstall docker.
++ python3 -m pip uninstall -y -qq docker
++ python3 -m pip install -U -qq docker
++ python3 -m pip -qq freeze
bcrypt==4.0.1 beautifulsoup4==4.12.3 bitarray==2.9.2 certifi==2023.11.17 cffi==1.15.1
charset-normalizer==2.0.12 confluent-kafka==2.3.0 cryptography==40.0.2 decorator==5.1.1
deepdiff==5.7.0 dnspython==2.2.1 docker==5.0.3 elasticsearch==7.17.9
elasticsearch-dsl==7.4.1 enum34==1.1.10 future==0.18.3 idna==3.6
importlib-resources==5.4.0 ipaddr==2.2.0 isodate==0.6.1 Jinja2==3.0.3 jmespath==0.10.0
jsonpatch==1.32 jsonpath-rw==1.4.0 jsonpointer==2.3 kafka-python==2.0.2 lxml==5.1.0
MarkupSafe==2.0.1 more-itertools==5.0.0 netaddr==0.8.0 netifaces==0.11.0
odltools==0.1.28 ordered-set==4.0.2 paramiko==3.4.0 pbr==6.0.0 pkg_resources==0.0.0
ply==3.11 protobuf==3.19.6 pyang==2.6.0 pyangbind==0.8.1 pycparser==2.21
pyhocon==0.3.60 PyNaCl==1.5.0 pyparsing==3.1.1 python-dateutil==2.8.2 PyYAML==6.0.1
regex==2023.8.8 requests==2.27.1 robotframework==6.1.1 robotframework-httplibrary==0.4.2
robotframework-onap==0.6.0.dev105 robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4 robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3 robotframework-sshlibrary==3.8.0
robotlibcore-temp==1.0.2 scapy==2.5.0 scp==0.14.5 selenium==3.141.0 six==1.16.0
soupsieve==2.3.2.post1 urllib3==1.26.18 waitress==2.0.0 WebOb==1.8.7
websocket-client==1.3.1 WebTest==3.0.0 zipp==3.6.0
++ uname
++ grep -q Linux
++ sudo apt-get -y -qq install libxml2-utils
+ load_set
+ _setopts=ehuxB
++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o nounset
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo ehuxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +e
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +u
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ source_safely /tmp/tmp.HWPfvHz0nD/bin/activate
+ '[' -z /tmp/tmp.HWPfvHz0nD/bin/activate ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /tmp/tmp.HWPfvHz0nD/bin/activate
++ deactivate nondestructive
++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ unset _OLD_VIRTUAL_PATH
++ '[' -n '' ']'
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
++ '[' -n '' ']'
++ unset VIRTUAL_ENV
++ '[' '!' nondestructive = nondestructive ']'
++ VIRTUAL_ENV=/tmp/tmp.HWPfvHz0nD
++ export VIRTUAL_ENV
++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ PATH=/tmp/tmp.HWPfvHz0nD/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ '[' -n '' ']'
++ '[' -z '' ']'
++ _OLD_VIRTUAL_PS1='(tmp.HWPfvHz0nD) '
++ '[' 'x(tmp.HWPfvHz0nD) ' '!=' x ']'
++ PS1='(tmp.HWPfvHz0nD) (tmp.HWPfvHz0nD) '
++ export PS1
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ export TEST_OPTIONS=
+ TEST_OPTIONS=
++ mktemp -d
+ WORKDIR=/tmp/tmp.Kzo1TbVgxn
+ cd /tmp/tmp.Kzo1TbVgxn
+ docker login -u docker -p docker nexus3.onap.org:10001
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
+ SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
+++ GERRIT_BRANCH=master
+++ echo GERRIT_BRANCH=master
GERRIT_BRANCH=master
+++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
+++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
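The trace above repeatedly runs sub-scripts through `source_safely`, which first calls `relax_set` (`set +e; set +o pipefail`). The actual helper definitions live in run-project-csit.sh and are not printed in the log; the following is a hypothetical reconstruction of that pattern, showing why a failing command inside a sourced script surfaces as a return code instead of killing an errexit caller:

```shell
# Hypothetical reconstruction (not the actual ONAP helpers) of the
# source_safely/relax_set pattern visible in the trace above.
relax_set() {
  set +e            # stop aborting on non-zero exit codes
  set +o pipefail   # stop propagating failures through pipelines
}

source_safely() {
  [ -z "$1" ] && return 1   # refuse an empty path, like the '[ -z ... ]' guards in the log
  relax_set
  . "$1"                    # status of the sourced script's last command is returned
}

set -e                                     # the caller runs under errexit
tmpscript=$(mktemp)
printf '%s\n' 'echo "sourced"' 'false' > "$tmpscript"
source_safely "$tmpscript" || status=$?    # the sourced 'false' surfaces as status 1
echo "caller survived with status ${status:-0}"
rm -f "$tmpscript"
```

Without the `relax_set` call, the `false` inside the sourced file would terminate the whole job under `set -e`; with it, the caller records the failure and continues.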
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
latest: Pulling from prom/prometheus
Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:7567a7c70a3c1d75aeeedc968d1304174a16651e55a60d1fb132a05e1e63a054
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:bedafcd670058dc2d485934eb404bb04ce1a30b23cf7a567427a60ae561f25c7
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-api
Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-pap
Digest: sha256:8a0432281bb5edb6d25e3d0e62d78b6aebc2875f52ecd11259251b497208c04e
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT
Creating mariadb ...
Creating prometheus ...
Creating compose_zookeeper_1 ...
Creating simulator ...
Creating prometheus ... done
Creating grafana ...
Creating grafana ... done
Creating mariadb ... done
Creating policy-db-migrator ...
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating simulator ... done
Creating compose_zookeeper_1 ... done
Creating kafka ...
Creating kafka ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
NAMES                 STATUS
policy-apex-pdp       Up 10 seconds
policy-pap            Up 10 seconds
kafka                 Up 11 seconds
policy-api            Up 15 seconds
grafana               Up 17 seconds
simulator             Up 14 seconds
compose_zookeeper_1   Up 13 seconds
prometheus            Up 18 seconds
mariadb               Up 17 seconds

NAMES                 STATUS
policy-apex-pdp       Up 15 seconds
policy-pap            Up 16 seconds
kafka                 Up 16 seconds
policy-api            Up 20 seconds
grafana               Up 22 seconds
simulator             Up 19 seconds
compose_zookeeper_1   Up 18 seconds
prometheus            Up 23 seconds
mariadb               Up 22 seconds

NAMES                 STATUS
policy-apex-pdp       Up 20 seconds
policy-pap            Up 21 seconds
kafka                 Up 22 seconds
policy-api            Up 25 seconds
grafana               Up 27 seconds
simulator             Up 24 seconds
compose_zookeeper_1   Up 23 seconds
prometheus            Up 28 seconds
mariadb               Up 27 seconds

NAMES                 STATUS
policy-apex-pdp       Up 25 seconds
policy-pap            Up 26 seconds
kafka                 Up 27 seconds
policy-api            Up 30 seconds
grafana               Up 32 seconds
simulator             Up 29 seconds
compose_zookeeper_1   Up 28 seconds
prometheus            Up 33 seconds
mariadb               Up 32 seconds

NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 31 seconds
kafka                 Up 32 seconds
policy-api            Up 35 seconds
grafana               Up 37 seconds
simulator             Up 34 seconds
compose_zookeeper_1   Up 33 seconds
prometheus            Up 38 seconds
mariadb               Up 37 seconds

++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:13:16 up 2:39, 0 users, load average: 3.14, 1.15, 0.43
Tasks: 203 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.4 us, 0.1 sy, 0.0 ni, 99.3 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
+ echo
+ sh -c 'free -h'
          total   used   free   shared   buff/cache   available
Mem:      31G     2.9G   21G    1.3M     6.7G         28G
Swap:     1.0G    0B     1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 31 seconds
kafka                 Up 32 seconds
policy-api            Up 35 seconds
grafana               Up 38 seconds
simulator             Up 34 seconds
compose_zookeeper_1   Up 33 seconds
prometheus            Up 38 seconds
mariadb               Up 37 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O     PIDS
846915d7a54f   policy-apex-pdp       1.06%   179.4MiB / 31.41GiB   0.56%   7.05kB / 6.77kB   0B / 0B       48
9af7b1a3005f   policy-pap            1.57%   663.2MiB / 31.41GiB   2.06%   25.5kB / 27.6kB   0B / 180MB    61
a85f1e357d50   kafka                 0.65%   359.7MiB / 31.41GiB   1.12%   66.6kB / 69.3kB   0B / 500kB    81
5247b053988a   policy-api            0.09%   564.6MiB / 31.41GiB   1.76%   999kB / 710kB     0B / 0B       54
f31725014148   grafana               0.07%   57.49MiB / 31.41GiB   0.18%   13.8kB / 3kB      0B / 24MB     18
8bdbb445e71c   simulator             0.06%   124.5MiB / 31.41GiB   0.39%   1.19kB / 0B       0B / 0B       76
a321378a8a85   compose_zookeeper_1   0.10%   98.93MiB / 31.41GiB   0.31%   54.4kB / 48.3kB   0B / 377kB    60
cbc1107bcc77   prometheus            0.05%   18.12MiB / 31.41GiB   0.06%   1.82kB / 158B     205kB / 0B    11
de4d8675694f   mariadb               0.01%   101.9MiB / 31.41GiB   0.32%   995kB / 1.19MB    11MB / 68.3MB 37
+ echo
+ cd /tmp/tmp.Kzo1TbVgxn
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
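The testplan expansion above (drop comment and blank lines, prefix each suite with the tests directory, join with `xargs`) can be reproduced in isolation. The pipeline and paths are taken from the log; the sample testplan contents, including the comment line, are invented for the demo:

```shell
# Reproduce the log's testplan expansion: filter comments/blanks, prefix each
# suite with TEST_PLAN_DIR, and let xargs collapse the result onto one line.
TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
plan=$(mktemp)
printf '%s\n' '# suites for the pap CSIT' '' 'pap-test.robot' 'pap-slas.robot' > "$plan"
SUITES=$(egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' "$plan" \
  | sed "s|^|${TEST_PLAN_DIR}/|" | xargs)
echo "$SUITES"
rm -f "$plan"
```

The `|` delimiter in the `sed` expression is what lets the slash-heavy workspace path be used as a prefix without escaping.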
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check                         | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check   | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics          | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation  | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation    | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups                       | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy          | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy        | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete            | PASS |
------------------------------------------------------------------------------
pap.Pap-Test                                                          | PASS |
22 tests, 22 passed, 0 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time  | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas                                                          | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap                                                                   | PASS |
30 tests, 30 passed, 0 failed
==============================================================================
Output:  /tmp/tmp.Kzo1TbVgxn/output.xml
Log:     /tmp/tmp.Kzo1TbVgxn/log.html
Report:  /tmp/tmp.Kzo1TbVgxn/report.html
+ RESULT=0
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 0'
RESULT: 0
+ exit 0
+ on_exit
+ rc=0
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
kafka                 Up 2 minutes
policy-api            Up 2 minutes
grafana               Up 2 minutes
simulator             Up 2 minutes
compose_zookeeper_1   Up 2 minutes
prometheus            Up 2 minutes
mariadb               Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:15:06 up 2:40, 0 users, load average: 0.54, 0.82, 0.39
Tasks: 201 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.5 us, 0.1 sy, 0.0 ni, 99.3 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
+ echo
+ sh -c 'free -h'
          total   used   free   shared   buff/cache   available
Mem:      31G     3.1G   21G    1.3M     6.7G         27G
Swap:     1.0G    0B     1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
kafka                 Up 2 minutes
policy-api            Up 2 minutes
grafana               Up 2 minutes
simulator             Up 2 minutes
compose_zookeeper_1 Up 2 minutes prometheus Up 2 minutes mariadb Up 2 minutes + echo + docker stats --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 846915d7a54f policy-apex-pdp 0.30% 183.6MiB / 31.41GiB 0.57% 56.2kB / 90.4kB 0B / 0B 51 9af7b1a3005f policy-pap 0.99% 710.1MiB / 31.41GiB 2.21% 2.33MB / 811kB 0B / 180MB 65 a85f1e357d50 kafka 9.56% 382.8MiB / 31.41GiB 1.19% 237kB / 213kB 0B / 606kB 83 5247b053988a policy-api 0.09% 603.2MiB / 31.41GiB 1.88% 2.49MB / 1.26MB 0B / 0B 55 f31725014148 grafana 0.02% 64.25MiB / 31.41GiB 0.20% 14.5kB / 3.66kB 0B / 24MB 18 8bdbb445e71c simulator 0.06% 124.5MiB / 31.41GiB 0.39% 1.5kB / 0B 0B / 0B 76 a321378a8a85 compose_zookeeper_1 0.07% 98.96MiB / 31.41GiB 0.31% 57.3kB / 49.8kB 0B / 377kB 60 cbc1107bcc77 prometheus 0.09% 24.35MiB / 31.41GiB 0.08% 189kB / 10.9kB 205kB / 0B 14 de4d8675694f mariadb 0.01% 103.3MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 68.6MB 28 + echo + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' + relax_set + set +e + set +o pipefail + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ++ echo 'Shut down started!' Shut down started! ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose ++ source export-ports.sh ++ source get-versions.sh ++ echo 'Collecting logs from docker compose containers...' Collecting logs from docker compose containers... ++ docker-compose logs ++ cat docker_compose.log Attaching to policy-apex-pdp, policy-pap, kafka, policy-api, policy-db-migrator, grafana, simulator, compose_zookeeper_1, prometheus, mariadb zookeeper_1 | ===> User zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) zookeeper_1 | ===> Configuring ... 
zookeeper_1 | ===> Running preflight checks ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper_1 | ===> Launching ...
zookeeper_1 | ===> Launching zookeeper ...
zookeeper_1 | [2024-01-30 23:12:46,414] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-30 23:12:46,421] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-30 23:12:46,421] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-30 23:12:46,421] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-30 23:12:46,421] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-30 23:12:46,422] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-01-30 23:12:46,423] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-01-30 23:12:46,423] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-01-30 23:12:46,423] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper_1 | [2024-01-30 23:12:46,424] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
zookeeper_1 | [2024-01-30 23:12:46,424] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-30 23:12:46,424] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-30 23:12:46,424] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-30 23:12:46,424] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-30 23:12:46,425] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-30 23:12:46,425] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper_1 | [2024-01-30 23:12:46,439] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@5fa07e12 (org.apache.zookeeper.server.ServerMetrics)
zookeeper_1 | [2024-01-30 23:12:46,444] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-01-30 23:12:46,444] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-01-30 23:12:46,447] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-01-30 23:12:46,459] INFO (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,459] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,459] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,459] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,459] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,459] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,459] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,459] INFO | | (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,460] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,460] INFO (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:host.name=a321378a8a85 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/
zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/.
./share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/u
sr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:os.memory.max=512MB
(org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,461] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,462] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,462] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,462] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,462] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,462] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,462] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,463] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper_1 | [2024-01-30 23:12:46,464] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,464] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,465] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper_1 | [2024-01-30 23:12:46,465] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper_1 | [2024-01-30 23:12:46,466] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-30 23:12:46,466] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-30 23:12:46,466] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-30 23:12:46,466] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-30 23:12:46,466] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-30 23:12:46,466] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-30 23:12:46,469] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,469] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,469] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper_1 | [2024-01-30 23:12:46,469] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper_1 | [2024-01-30 23:12:46,470] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,494] INFO Logging initialized @558ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper_1 | [2024-01-30 23:12:46,588] WARN o.e.j.s.ServletContextHandler@45385f75{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-01-30 23:12:46,588] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-01-30 23:12:46,613] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server)
zookeeper_1 | [2024-01-30 23:12:46,648] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-01-30 23:12:46,648] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-01-30 23:12:46,649] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-01-30 23:12:46,652] WARN ServletContext@o.e.j.s.ServletContextHandler@45385f75{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper_1 | [2024-01-30 23:12:46,660] INFO Started o.e.j.s.ServletContextHandler@45385f75{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-01-30 23:12:46,683] INFO Started ServerConnector@304bb45b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper_1 | [2024-01-30 23:12:46,683] INFO Started @747ms (org.eclipse.jetty.server.Server)
zookeeper_1 | [2024-01-30 23:12:46,683] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper_1 | [2024-01-30 23:12:46,689] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper_1 | [2024-01-30 23:12:46,690] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper_1 | [2024-01-30 23:12:46,692] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2024-01-30 23:12:46,694] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2024-01-30 23:12:46,710] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper_1 | [2024-01-30 23:12:46,710] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper_1 | [2024-01-30 23:12:46,712] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-01-30 23:12:46,712] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-01-30 23:12:46,717] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper_1 | [2024-01-30 23:12:46,717] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-01-30 23:12:46,720] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-01-30 23:12:46,720] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-01-30 23:12:46,721] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-30 23:12:46,730] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper_1 | [2024-01-30 23:12:46,730] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper_1 | [2024-01-30 23:12:46,747] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper_1 | [2024-01-30 23:12:46,748] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
zookeeper_1 | [2024-01-30 23:12:48,267] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
kafka | Running in Zookeeper mode...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
kafka | ===> Check if Zookeeper is healthy ...
kafka | [2024-01-30 23:12:48,209] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:host.name=a85f1e357d50 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:java.vendor=Azul Systems, Inc.
(org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:48,210] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:48,210] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gso
n-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210]
INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:os.memory.free=494MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,210] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,213] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@23a5fd2 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,216] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-01-30 23:12:48,220] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
kafka | [2024-01-30 23:12:48,226] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-30 23:12:48,245] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-30 23:12:48,245] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-30 23:12:48,255] INFO Socket connection established, initiating session, client: /172.17.0.9:53684, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-30 23:12:48,287] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000091241c0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
kafka | [2024-01-30 23:12:48,415] INFO Session: 0x1000091241c0000 closed (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-30 23:12:48,415] INFO EventThread shut down for session: 0x1000091241c0000 (org.apache.zookeeper.ClientCnxn)
kafka | Using log4j config /etc/kafka/log4j.properties
kafka | ===> Launching ...
kafka | ===> Launching kafka ...
kafka | [2024-01-30 23:12:49,051] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
kafka | [2024-01-30 23:12:49,345] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
kafka | [2024-01-30 23:12:49,410] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
kafka | [2024-01-30 23:12:49,412] INFO starting (kafka.server.KafkaServer)
kafka | [2024-01-30 23:12:49,412] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka | [2024-01-30 23:12:49,431] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181.
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-30 23:12:49,436] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,436] INFO Client environment:host.name=a85f1e357d50 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty
-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/
../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/u
sr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] 
INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,437] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,440] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@68be8808 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-30 23:12:49,444] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-01-30 23:12:49,450] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-30 23:12:49,451] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-30 23:12:49,457] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-30 23:12:49,464] INFO Socket connection established, initiating session, client: /172.17.0.9:53686, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-30 23:12:49,472] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x1000091241c0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-30 23:12:49,477] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-30 23:12:49,717] INFO Cluster ID = BqZk-O6TQAORpckjOaIW7A (kafka.server.KafkaServer) kafka | [2024-01-30 23:12:49,719] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2024-01-30 23:12:49,760] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | 
delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.4:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.9:9092) open policy-apex-pdp | Waiting for pap port 6969... 
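The "Waiting for mariadb port 3306... / mariadb (172.17.0.4:3306) open" messages above come from a startup gate that polls each dependency's TCP port before launching the service. A minimal sketch of that pattern (the function name `wait_for_port` and its timeouts are illustrative, not the actual script used by the image):

```python
import socket
import time


def wait_for_port(host, port, timeout_s=120.0, interval_s=2.0):
    """Poll until a TCP connection to (host, port) succeeds, mirroring the
    'Waiting for <service> port ...' / '<service> (<addr>) open' messages.
    Raises TimeoutError if the port never becomes reachable."""
    deadline = time.monotonic() + timeout_s
    while True:
        try:
            # A successful connect means the service is at least listening.
            with socket.create_connection((host, port), timeout=interval_s):
                print(f"{host} ({port}) open")
                return True
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not reachable in {timeout_s}s")
            time.sleep(interval_s)
```

In the log this gate runs three times in sequence (mariadb:3306, kafka:9092, pap:6969) before `apexApps.sh` starts the JVM.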
policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-01-30T23:13:16.072+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-01-30T23:13:16.266+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 9ff8f2a4-20e4-47ce-9646-2a802e941f7c policy-apex-pdp | 
group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | 
sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | 
ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-01-30T23:13:16.415+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-apex-pdp | [2024-01-30T23:13:16.416+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | [2024-01-30T23:13:16.416+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706656396414 policy-apex-pdp | [2024-01-30T23:13:16.418+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-1, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-01-30T23:13:16.431+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-01-30T23:13:16.431+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-01-30T23:13:16.437+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9ff8f2a4-20e4-47ce-9646-2a802e941f7c, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-01-30T23:13:16.458+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | 
check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.5-IV2 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = 
true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 9ff8f2a4-20e4-47ce-9646-2a802e941f7c policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted 
policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | session.timeout.ms = 45000
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp |
policy-apex-pdp | [2024-01-30T23:13:16.466+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-apex-pdp | [2024-01-30T23:13:16.467+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-apex-pdp | [2024-01-30T23:13:16.467+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706656396466
policy-apex-pdp | [2024-01-30T23:13:16.467+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Subscribed to topic(s): policy-pdp-pap
policy-apex-pdp | [2024-01-30T23:13:16.468+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0180f190-dcb6-486a-ad15-91da06b3ed3a, alive=false, publisher=null]]: starting
policy-apex-pdp | [2024-01-30T23:13:16.479+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-apex-pdp | acks = -1
policy-apex-pdp | auto.include.jmx.reporter = true
policy-apex-pdp | batch.size = 16384
policy-apex-pdp | bootstrap.servers = [kafka:9092]
policy-apex-pdp | buffer.memory = 33554432
policy-apex-pdp | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | client.id = producer-1
policy-apex-pdp | compression.type = none
policy-apex-pdp | connections.max.idle.ms = 540000
policy-apex-pdp | delivery.timeout.ms = 120000
policy-apex-pdp | enable.idempotence = true
policy-apex-pdp | interceptor.classes = []
policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-apex-pdp | linger.ms = 0
policy-apex-pdp | max.block.ms = 60000
policy-apex-pdp | max.in.flight.requests.per.connection = 5
policy-apex-pdp | max.request.size = 1048576
policy-apex-pdp | metadata.max.age.ms = 300000
policy-apex-pdp | metadata.max.idle.ms = 300000
policy-apex-pdp | metric.reporters = []
policy-apex-pdp | metrics.num.samples = 2
kafka | num.io.threads = 8
kafka | num.network.threads = 3
kafka | num.partitions = 1
kafka | num.recovery.threads.per.data.dir = 1
kafka | num.replica.alter.log.dirs.threads = null
kafka | num.replica.fetchers = 1
kafka | offset.metadata.max.bytes = 4096
kafka | offsets.commit.required.acks = -1
kafka | offsets.commit.timeout.ms = 5000
kafka | offsets.load.buffer.size = 5242880
kafka | offsets.retention.check.interval.ms = 600000
kafka | offsets.retention.minutes = 10080
kafka | offsets.topic.compression.codec = 0
kafka | offsets.topic.num.partitions = 50
kafka | offsets.topic.replication.factor = 1
kafka | offsets.topic.segment.bytes = 104857600
kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
kafka | password.encoder.iterations = 4096
kafka | password.encoder.key.length = 128
kafka | password.encoder.keyfactory.algorithm = null
kafka | password.encoder.old.secret = null
kafka | password.encoder.secret = null
kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
kafka | process.roles = []
kafka | producer.id.expiration.check.interval.ms = 600000
kafka | producer.id.expiration.ms = 86400000
kafka | producer.purgatory.purge.interval.requests = 1000
kafka | queued.max.request.bytes = -1
kafka | queued.max.requests = 500
kafka | quota.window.num = 11
kafka | quota.window.size.seconds = 1
kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
kafka | remote.log.manager.task.interval.ms = 30000
kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
kafka | remote.log.manager.task.retry.backoff.ms = 500
kafka | remote.log.manager.task.retry.jitter = 0.2
kafka | remote.log.manager.thread.pool.size = 10
kafka | remote.log.metadata.manager.class.name = null
kafka | remote.log.metadata.manager.class.path = null
kafka | remote.log.metadata.manager.impl.prefix = null
kafka | remote.log.metadata.manager.listener.name = null
kafka | remote.log.reader.max.pending.tasks = 100
kafka | remote.log.reader.threads = 10
kafka | remote.log.storage.manager.class.name = null
kafka | remote.log.storage.manager.class.path = null
kafka | remote.log.storage.manager.impl.prefix = null
kafka | remote.log.storage.system.enable = false
kafka | replica.fetch.backoff.ms = 1000
kafka | replica.fetch.max.bytes = 1048576
kafka | replica.fetch.min.bytes = 1
kafka | replica.fetch.response.max.bytes = 10485760
kafka | replica.fetch.wait.max.ms = 500
kafka | replica.high.watermark.checkpoint.interval.ms = 5000
kafka | replica.lag.time.max.ms = 30000
kafka | replica.selector.class = null
kafka | replica.socket.receive.buffer.bytes = 65536
kafka | replica.socket.timeout.ms = 30000
kafka | replication.quota.window.num = 11
kafka | replication.quota.window.size.seconds = 1
kafka | request.timeout.ms = 30000
kafka | reserved.broker.max.id = 1000
kafka | sasl.client.callback.handler.class = null
kafka | sasl.enabled.mechanisms = [GSSAPI]
kafka | sasl.jaas.config = null
kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | sasl.kerberos.min.time.before.relogin = 60000
kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka | sasl.kerberos.service.name = null
kafka | sasl.kerberos.ticket.renew.jitter = 0.05
kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | sasl.login.callback.handler.class = null
policy-apex-pdp | metrics.recording.level = INFO
policy-apex-pdp | metrics.sample.window.ms = 30000
policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
policy-apex-pdp | partitioner.availability.timeout.ms = 0
policy-apex-pdp | partitioner.class = null
policy-apex-pdp | partitioner.ignore.keys = false
policy-apex-pdp | receive.buffer.bytes = 32768
policy-apex-pdp | reconnect.backoff.max.ms = 1000
policy-apex-pdp | reconnect.backoff.ms = 50
policy-apex-pdp | request.timeout.ms = 30000
policy-apex-pdp | retries = 2147483647
policy-apex-pdp | retry.backoff.ms = 100
policy-apex-pdp | sasl.client.callback.handler.class = null
policy-apex-pdp | sasl.jaas.config = null
policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
policy-apex-pdp | sasl.kerberos.service.name = null
policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-apex-pdp | sasl.login.callback.handler.class = null
policy-apex-pdp | sasl.login.class = null
policy-apex-pdp | sasl.login.connect.timeout.ms = null
policy-apex-pdp | sasl.login.read.timeout.ms = null
policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.login.retry.backoff.ms = 100
policy-apex-pdp | sasl.mechanism = GSSAPI
policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
policy-apex-pdp | sasl.oauthbearer.expected.audience = null
policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
policy-apex-pdp | security.protocol = PLAINTEXT
policy-apex-pdp | security.providers = null
policy-apex-pdp | send.buffer.bytes = 131072
policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | ssl.cipher.suites = null
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | ssl.engine.factory.class = null
policy-apex-pdp | ssl.key.password = null
policy-apex-pdp | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | ssl.keystore.certificate.chain = null
policy-apex-pdp | ssl.keystore.key = null
policy-apex-pdp | ssl.keystore.location = null
policy-apex-pdp | ssl.keystore.password = null
policy-apex-pdp | ssl.keystore.type = JKS
policy-apex-pdp | ssl.protocol = TLSv1.3
policy-apex-pdp | ssl.provider = null
policy-apex-pdp | ssl.secure.random.implementation = null
policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | ssl.truststore.certificates = null
policy-apex-pdp | ssl.truststore.location = null
policy-apex-pdp | ssl.truststore.password = null
policy-apex-pdp | ssl.truststore.type = JKS
policy-apex-pdp | transaction.timeout.ms = 60000
policy-apex-pdp | transactional.id = null
policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-apex-pdp |
policy-apex-pdp | [2024-01-30T23:13:16.492+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-apex-pdp | [2024-01-30T23:13:16.509+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-apex-pdp | [2024-01-30T23:13:16.509+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-apex-pdp | [2024-01-30T23:13:16.509+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706656396509
policy-apex-pdp | [2024-01-30T23:13:16.509+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=0180f190-dcb6-486a-ad15-91da06b3ed3a, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-apex-pdp | [2024-01-30T23:13:16.510+00:00|INFO|ServiceManager|main] service manager starting set alive
policy-apex-pdp | [2024-01-30T23:13:16.510+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
policy-apex-pdp | [2024-01-30T23:13:16.512+00:00|INFO|ServiceManager|main] service manager starting topic sinks
policy-apex-pdp | [2024-01-30T23:13:16.512+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
policy-apex-pdp | [2024-01-30T23:13:16.514+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
policy-apex-pdp | [2024-01-30T23:13:16.514+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
policy-apex-pdp | [2024-01-30T23:13:16.514+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
policy-apex-pdp | [2024-01-30T23:13:16.515+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9ff8f2a4-20e4-47ce-9646-2a802e941f7c, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3
policy-apex-pdp | [2024-01-30T23:13:16.515+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9ff8f2a4-20e4-47ce-9646-2a802e941f7c, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
policy-apex-pdp | [2024-01-30T23:13:16.516+00:00|INFO|ServiceManager|main] service manager starting Create REST server
policy-apex-pdp | [2024-01-30T23:13:16.531+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
policy-apex-pdp | []
policy-apex-pdp | [2024-01-30T23:13:16.533+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"1a237c56-ef25-4a4a-9e2c-0260681baf9f","timestampMs":1706656396515,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2024-01-30T23:13:16.681+00:00|INFO|ServiceManager|main] service manager starting Rest Server
policy-apex-pdp | [2024-01-30T23:13:16.681+00:00|INFO|ServiceManager|main] service manager starting
policy-apex-pdp | [2024-01-30T23:13:16.681+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-apex-pdp | [2024-01-30T23:13:16.681+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@71a9b4c7{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4628b1d3{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | [2024-01-30T23:13:16.690+00:00|INFO|ServiceManager|main] service manager started
policy-apex-pdp | [2024-01-30T23:13:16.691+00:00|INFO|ServiceManager|main] service manager started
policy-apex-pdp | [2024-01-30T23:13:16.691+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
policy-apex-pdp | [2024-01-30T23:13:16.691+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@71a9b4c7{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4628b1d3{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
policy-apex-pdp | [2024-01-30T23:13:16.776+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Cluster ID: BqZk-O6TQAORpckjOaIW7A
policy-apex-pdp | [2024-01-30T23:13:16.776+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: BqZk-O6TQAORpckjOaIW7A
policy-apex-pdp | [2024-01-30T23:13:16.777+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-apex-pdp | [2024-01-30T23:13:16.778+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
policy-apex-pdp | [2024-01-30T23:13:16.785+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] (Re-)joining group
policy-apex-pdp | [2024-01-30T23:13:16.810+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Request joining group due to: need to re-join with the given member-id: consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2-09de679e-d476-424e-b6d1-a15a0d620de2
policy-apex-pdp | [2024-01-30T23:13:16.810+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-apex-pdp | [2024-01-30T23:13:16.810+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] (Re-)joining group
policy-apex-pdp | [2024-01-30T23:13:17.259+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
policy-apex-pdp | [2024-01-30T23:13:17.261+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
policy-apex-pdp | [2024-01-30T23:13:19.821+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Successfully joined group with generation Generation{generationId=1, memberId='consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2-09de679e-d476-424e-b6d1-a15a0d620de2', protocol='range'}
policy-apex-pdp | [2024-01-30T23:13:19.830+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Finished assignment for group at generation 1: {consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2-09de679e-d476-424e-b6d1-a15a0d620de2=Assignment(partitions=[policy-pdp-pap-0])}
policy-apex-pdp | [2024-01-30T23:13:19.838+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Successfully synced group in generation Generation{generationId=1, memberId='consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2-09de679e-d476-424e-b6d1-a15a0d620de2', protocol='range'}
policy-apex-pdp | [2024-01-30T23:13:19.839+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-apex-pdp | [2024-01-30T23:13:19.841+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Adding newly assigned partitions: policy-pdp-pap-0
policy-apex-pdp | [2024-01-30T23:13:19.861+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Found no committed offset for partition policy-pdp-pap-0
policy-apex-pdp | [2024-01-30T23:13:19.871+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2, groupId=9ff8f2a4-20e4-47ce-9646-2a802e941f7c] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-apex-pdp | [2024-01-30T23:13:36.514+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"cc9a3d7e-6fad-40c4-946d-d865e0c0f98c","timestampMs":1706656416513,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2024-01-30T23:13:36.535+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"cc9a3d7e-6fad-40c4-946d-d865e0c0f98c","timestampMs":1706656416513,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2024-01-30T23:13:36.538+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2024-01-30T23:13:36.674+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"95617564-9902-46ea-a031-5c473077bc58","timestampMs":1706656416624,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-30T23:13:36.682+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
policy-apex-pdp | [2024-01-30T23:13:36.683+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"14a6ee9f-1ac9-4550-bc77-87a565b4b7f0","timestampMs":1706656416682,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2024-01-30T23:13:36.684+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"95617564-9902-46ea-a031-5c473077bc58","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"15f9b549-16ef-482c-8053-79ffdc7adaa7","timestampMs":1706656416684,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-30T23:13:36.703+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"14a6ee9f-1ac9-4550-bc77-87a565b4b7f0","timestampMs":1706656416682,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup"}
policy-apex-pdp | [2024-01-30T23:13:36.704+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2024-01-30T23:13:36.710+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"95617564-9902-46ea-a031-5c473077bc58","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"15f9b549-16ef-482c-8053-79ffdc7adaa7","timestampMs":1706656416684,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-30T23:13:36.712+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2024-01-30T23:13:36.731+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"845e0731-24c1-4793-a94a-51d784453a0e","timestampMs":1706656416624,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-30T23:13:36.736+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"845e0731-24c1-4793-a94a-51d784453a0e","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a3a8701d-f5d8-46f2-8bb5-07d7e08cf634","timestampMs":1706656416735,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-30T23:13:36.746+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"845e0731-24c1-4793-a94a-51d784453a0e","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a3a8701d-f5d8-46f2-8bb5-07d7e08cf634","timestampMs":1706656416735,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-30T23:13:36.746+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2024-01-30T23:13:36.765+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"16179ea5-6b9f-4f52-b894-6f3dc6366661","timestampMs":1706656416749,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-30T23:13:36.766+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"16179ea5-6b9f-4f52-b894-6f3dc6366661","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"c3c51fe5-b799-4ea9-886a-c2cd7f8f53ff","timestampMs":1706656416766,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-30T23:13:36.774+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"16179ea5-6b9f-4f52-b894-6f3dc6366661","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"c3c51fe5-b799-4ea9-886a-c2cd7f8f53ff","timestampMs":1706656416766,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-apex-pdp | [2024-01-30T23:13:36.774+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-apex-pdp | [2024-01-30T23:13:56.158+00:00|INFO|RequestLog|qtp830863979-33] 172.17.0.3 - policyadmin [30/Jan/2024:23:13:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.49.1"
policy-apex-pdp | [2024-01-30T23:14:56.079+00:00|INFO|RequestLog|qtp830863979-32] 172.17.0.3 - policyadmin [30/Jan/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.49.1"
kafka | sasl.login.class = null
kafka | sasl.login.connect.timeout.ms = null
kafka | sasl.login.read.timeout.ms = null
kafka | sasl.login.refresh.buffer.seconds = 300
kafka | sasl.login.refresh.min.period.seconds = 60
kafka | sasl.login.refresh.window.factor = 0.8
kafka | sasl.login.refresh.window.jitter = 0.05
kafka | sasl.login.retry.backoff.max.ms = 10000
kafka | sasl.login.retry.backoff.ms = 100
kafka | sasl.mechanism.controller.protocol = GSSAPI
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
kafka | sasl.oauthbearer.clock.skew.seconds = 30
kafka | sasl.oauthbearer.expected.audience = null
kafka | sasl.oauthbearer.expected.issuer = null
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | sasl.oauthbearer.jwks.endpoint.url = null
kafka | sasl.oauthbearer.scope.claim.name = scope
kafka | sasl.oauthbearer.sub.claim.name = sub
kafka | sasl.oauthbearer.token.endpoint.url = null
kafka | sasl.server.callback.handler.class = null
kafka | sasl.server.max.receive.size = 524288
kafka | security.inter.broker.protocol = PLAINTEXT
kafka | security.providers = null
kafka | server.max.startup.time.ms = 9223372036854775807
kafka | socket.connection.setup.timeout.max.ms = 30000
kafka | socket.connection.setup.timeout.ms = 10000
kafka | socket.listen.backlog.size = 50
kafka | socket.receive.buffer.bytes = 102400
kafka | socket.request.max.bytes = 104857600
kafka | socket.send.buffer.bytes = 102400
kafka | ssl.cipher.suites = []
kafka | ssl.client.auth = none
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | ssl.endpoint.identification.algorithm = https
kafka | ssl.engine.factory.class = null
kafka | ssl.key.password = null
kafka | ssl.keymanager.algorithm = SunX509
kafka | ssl.keystore.certificate.chain = null
kafka | ssl.keystore.key = null
kafka | ssl.keystore.location = null
kafka | ssl.keystore.password = null
kafka | ssl.keystore.type = JKS
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.3
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 2
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 3
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | unstable.api.versions.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = null
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.metadata.migration.enable = false
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | (kafka.server.KafkaConfig)
kafka | [2024-01-30 23:12:49,786] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-01-30 23:12:49,787] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-01-30 23:12:49,788] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-01-30 23:12:49,791] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2024-01-30 23:12:49,814] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
kafka | [2024-01-30 23:12:49,818] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
kafka | [2024-01-30 23:12:49,825] INFO Loaded 0 logs in 10ms (kafka.log.LogManager)
kafka | [2024-01-30 23:12:49,827] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2024-01-30 23:12:49,827] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka | [2024-01-30 23:12:49,836] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka | [2024-01-30 23:12:49,880] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
kafka | [2024-01-30 23:12:49,895] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
kafka | [2024-01-30 23:12:49,926] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-01-30 23:12:49,947] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-01-30 23:12:50,275] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2024-01-30 23:12:50,298] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
kafka | [2024-01-30 23:12:50,299] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2024-01-30 23:12:50,303] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
kafka | [2024-01-30 23:12:50,307] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-01-30 23:12:50,324] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-01-30 23:12:50,326] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-01-30 23:12:50,327] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-01-30 23:12:50,328] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-01-30 23:12:50,340] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2024-01-30 23:12:50,361] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2024-01-30 23:12:50,385] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1706656370376,1706656370376,1,0,0,72058217414000641,258,0,27
kafka | (kafka.zk.KafkaZkClient)
kafka | [2024-01-30 23:12:50,385] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
kafka | [2024-01-30 23:12:50,431] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
kafka | [2024-01-30 23:12:50,439] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-01-30 23:12:50,447] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-01-30 23:12:50,447] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2024-01-30 23:12:50,450] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-01-30 23:12:50,455] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,464] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:12:50,468] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,469] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:12:50,473] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2024-01-30 23:12:50,488] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-01-30 23:12:50,491] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-01-30 23:12:50,494] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2024-01-30 23:12:50,502] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
kafka | [2024-01-30 23:12:50,502] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,507] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,511] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,513] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,525] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,525] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2024-01-30 23:12:50,529] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,537] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
kafka | [2024-01-30 23:12:50,542] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
kafka |
[2024-01-30 23:12:50,543] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) kafka | [2024-01-30 23:12:50,544] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) policy-api | Waiting for mariadb port 3306... policy-api | mariadb (172.17.0.4:3306) open policy-api | Waiting for policy-db-migrator port 6824... policy-api | policy-db-migrator (172.17.0.7:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.4) policy-api | policy-api | [2024-01-30T23:12:53.146+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-01-30T23:12:53.147+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-01-30T23:12:54.800+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-01-30T23:12:54.884+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 74 ms. Found 6 JPA repository interfaces. policy-api | [2024-01-30T23:12:55.275+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-01-30T23:12:55.276+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-01-30T23:12:55.927+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-01-30T23:12:55.941+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-01-30T23:12:55.943+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-01-30T23:12:55.943+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] policy-api | [2024-01-30T23:12:56.034+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-01-30T23:12:56.035+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2821 ms policy-api | [2024-01-30T23:12:56.464+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-01-30T23:12:56.534+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-api | [2024-01-30T23:12:56.537+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer policy-api | [2024-01-30T23:12:56.583+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-01-30T23:12:56.902+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-01-30T23:12:56.920+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
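The `Waiting for mariadb port 3306... / mariadb (172.17.0.4:3306) open` lines above are the policy-api container entrypoint polling its dependencies before launching the JAR. A minimal sketch of such a readiness probe (the function name, timeout, and retry interval are illustrative assumptions, not taken from the actual entrypoint script):

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout_s: float = 120.0,
                  interval_s: float = 1.0) -> bool:
    """Poll until a TCP connect to host:port succeeds, mimicking the
    'Waiting for <service> port <p>...' / '<service> open' log pattern."""
    print(f"Waiting for {host} port {port}...")
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval_s):
                print(f"{host} ({port}) open")
                return True
        except OSError:
            time.sleep(interval_s)
    return False
```

In the compose setup this kind of check is what serializes startup: mariadb first, then policy-db-migrator, then the API itself.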
policy-api | [2024-01-30T23:12:57.003+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717
policy-api | [2024-01-30T23:12:57.005+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-api | [2024-01-30T23:12:57.032+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
policy-api | [2024-01-30T23:12:57.033+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
kafka | [2024-01-30 23:12:50,544] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,544] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,548] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,549] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,549] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,550] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
kafka | [2024-01-30 23:12:50,551] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,555] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
kafka | [2024-01-30 23:12:50,556] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
kafka | [2024-01-30 23:12:50,564] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
kafka | [2024-01-30 23:12:50,565] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-01-30 23:12:50,567] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-01-30 23:12:50,567] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
kafka | [2024-01-30 23:12:50,572] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
kafka | [2024-01-30 23:12:50,582] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
kafka | [2024-01-30 23:12:50,583] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-01-30 23:12:50,583] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-01-30 23:12:50,583] INFO Kafka startTimeMs: 1706656370574 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2024-01-30 23:12:50,585] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
kafka | [2024-01-30 23:12:50,586] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-01-30 23:12:50,587] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka | [2024-01-30 23:12:50,588] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-01-30 23:12:50,589] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-01-30 23:12:50,593] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka | [2024-01-30 23:12:50,594] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,600] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,600] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,600] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,601] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,602] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,615] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:50,645] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-30 23:12:50,694] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-01-30 23:12:50,710] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
kafka | [2024-01-30 23:12:55,616] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
kafka | [2024-01-30 23:12:55,616] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
kafka | [2024-01-30 23:13:15,510] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
kafka | [2024-01-30 23:13:15,517] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
kafka | [2024-01-30 23:13:15,517] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2024-01-30 23:13:15,521] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka | [2024-01-30 23:13:15,569] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(B6KsyJDSTOqeYl8_kE1bXQ),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(k7KpSrR8TmGhJQ-7sqVboQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
kafka | [2024-01-30 23:13:15,570] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController)
kafka | [2024-01-30 23:13:15,573] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,573] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,573] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,573] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,573] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,573] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,573] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,573] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-api | [2024-01-30T23:12:58.780+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-api | [2024-01-30T23:12:58.784+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-api | [2024-01-30T23:13:00.020+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
policy-api | [2024-01-30T23:13:00.853+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2]
policy-api | [2024-01-30T23:13:01.915+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-api | [2024-01-30T23:13:02.111+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6f3a8d5e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@680f7a5e, org.springframework.security.web.context.SecurityContextHolderFilter@56d3e4a9, org.springframework.security.web.header.HeaderWriterFilter@36c6d53b, org.springframework.security.web.authentication.logout.LogoutFilter@3341ba8e, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@2f84848e, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@2542d320, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@66161fee, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@3005133e, org.springframework.security.web.access.ExceptionTranslationFilter@69cf9acb, org.springframework.security.web.access.intercept.AuthorizationFilter@58a01e47]
policy-api | [2024-01-30T23:13:02.951+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-api | [2024-01-30T23:13:03.004+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
policy-api | [2024-01-30T23:13:03.032+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
policy-api | [2024-01-30T23:13:03.052+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.598 seconds (process running for 11.161)
policy-api | [2024-01-30T23:13:19.727+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-1] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-api | [2024-01-30T23:13:19.727+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Initializing Servlet 'dispatcherServlet'
policy-api | [2024-01-30T23:13:19.728+00:00|INFO|DispatcherServlet|http-nio-6969-exec-1] Completed initialization in 1 ms
policy-api | [2024-01-30T23:13:20.000+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-1] ***** OrderedServiceImpl implementers:
policy-api | []
mariadb | 2024-01-30 23:12:38+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-01-30 23:12:39+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
mariadb | 2024-01-30 23:12:39+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
mariadb | 2024-01-30 23:12:39+00:00 [Note] [Entrypoint]: Initializing database files
mariadb | 2024-01-30 23:12:39 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-01-30 23:12:39 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-01-30 23:12:39 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb |
mariadb |
mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
mariadb | To do so, start the server, then issue the following command:
mariadb |
mariadb | '/usr/bin/mysql_secure_installation'
mariadb |
mariadb | which will also give you the option of removing the test
mariadb | databases and anonymous user created by default. This is
mariadb | strongly recommended for production servers.
mariadb |
mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
mariadb |
mariadb | Please report any problems at https://mariadb.org/jira
mariadb |
mariadb | The latest information about MariaDB is available at https://mariadb.org/.
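The `Creating topic __consumer_offsets ... initial partition assignment HashMap(0 -> ArrayBuffer(1), ...)` record above shows every one of the 50 offsets partitions landing on broker 1, since this CSIT cluster has a single broker. A simplified model of that round-robin replica spreading (this is an illustrative sketch, not Kafka's actual Scala assignment code, which also randomizes the starting broker and shifts follower replicas):

```python
# Simplified round-robin replica assignment: partition p gets
# replication_factor consecutive brokers starting at index p.
# With one broker, every partition collapses to [that broker].
def assign_replicas(brokers: list[int], num_partitions: int,
                    replication_factor: int = 1) -> dict[int, list[int]]:
    assignment: dict[int, list[int]] = {}
    for p in range(num_partitions):
        assignment[p] = [brokers[(p + r) % len(brokers)]
                         for r in range(replication_factor)]
    return assignment
```

With `brokers=[1]` and `num_partitions=50` this reproduces the shape of the logged map: all 50 partitions assigned replica list `[1]`.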
mariadb |
mariadb | Consider joining MariaDB's strong and vibrant community:
mariadb | https://mariadb.org/get-involved/
mariadb |
mariadb | 2024-01-30 23:12:40+00:00 [Note] [Entrypoint]: Database files initialized
mariadb | 2024-01-30 23:12:40+00:00 [Note] [Entrypoint]: Starting temporary server
mariadb | 2024-01-30 23:12:40+00:00 [Note] [Entrypoint]: Waiting for server startup
mariadb | 2024-01-30 23:12:40 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 101 ...
mariadb | 2024-01-30 23:12:40 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,574] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,575] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,578] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,578] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,578] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,578] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,579] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,579] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,579] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,579] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,580] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,580] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,580] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,580] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,581] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,582] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,584] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,585] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,587] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,588] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
kafka |
[2024-01-30 23:13:15,589] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-30 23:13:15,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-30 23:13:15,590] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) kafka | [2024-01-30 23:13:15,590] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] 
TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,599] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from 
NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) mariadb | 2024-01-30 23:12:40 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-01-30 23:12:40 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-01-30 23:12:40 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-01-30 23:12:40 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-30 23:12:40 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-30 23:12:40 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-01-30 23:12:40 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-01-30 23:12:40 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-01-30 23:12:40 0 [Note] InnoDB: 128 rollback segments are active. 
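The Kafka controller entries above walk every partition through NonExistentPartition → NewPartition and each of its replicas through NonExistentReplica → NewReplica before bringing them online. A minimal sketch of that state progression (a hypothetical model for reading the log, not Kafka's actual state-machine code):

```python
# Hypothetical model of the partition state progression seen in the
# controller log: NonExistentPartition -> NewPartition -> OnlinePartition.
VALID_TRANSITIONS = {
    "NonExistentPartition": {"NewPartition"},
    "NewPartition": {"OnlinePartition"},
    "OnlinePartition": set(),
}

def change_partition_state(states, partition, target):
    """Move `partition` to `target`, rejecting transitions the log never shows."""
    current = states.get(partition, "NonExistentPartition")
    if target not in VALID_TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target} for {partition}")
    states[partition] = target
    return states

# Replay the progression for a few of the partitions named in the log.
states = {}
for p in ["__consumer_offsets-9", "__consumer_offsets-46", "policy-pdp-pap-0"]:
    change_partition_state(states, p, "NewPartition")
    change_partition_state(states, p, "OnlinePartition")
```

The same shape applies per replica (NonExistentReplica → NewReplica), which is why the TRACE lines repeat once per partition.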
mariadb | 2024-01-30 23:12:40 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-01-30 23:12:40 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-01-30 23:12:40 0 [Note] InnoDB: log sequence number 46590; transaction id 14
mariadb | 2024-01-30 23:12:40 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb | 2024-01-30 23:12:40 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb | 2024-01-30 23:12:40 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-01-30 23:12:40 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
mariadb | 2024-01-30 23:12:40 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
mariadb | 2024-01-30 23:12:41+00:00 [Note] [Entrypoint]: Temporary server started.
mariadb | 2024-01-30 23:12:43+00:00 [Note] [Entrypoint]: Creating user policy_user
mariadb | 2024-01-30 23:12:43+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
mariadb | 
mariadb | 
mariadb | 2024-01-30 23:12:43+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
mariadb | 2024-01-30 23:12:43+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
mariadb | #!/bin/bash -xv
mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
mariadb | #
mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
mariadb | # you may not use this file except in compliance with the License.
mariadb | # You may obtain a copy of the License at
mariadb | #
mariadb | # http://www.apache.org/licenses/LICENSE-2.0
mariadb | #
mariadb | # Unless required by applicable law or agreed to in writing, software
mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
mariadb | # See the License for the specific language governing permissions and
mariadb | # limitations under the License.
mariadb | 
mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | do
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
mariadb | done
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
mariadb | 
mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
mariadb | 
mariadb | 2024-01-30 23:12:44+00:00 [Note] [Entrypoint]: Stopping temporary server
mariadb | 2024-01-30 23:12:44 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: FTS optimize thread exiting.
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Starting shutdown...
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Buffer pool(s) dump completed at 240130 23:12:44
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Shutdown completed; log sequence number 347319; transaction id 298
mariadb | 2024-01-30 23:12:44 0 [Note] mariadbd: Shutdown complete
mariadb | 
mariadb | 2024-01-30 23:12:44+00:00 [Note] [Entrypoint]: Temporary server stopped
mariadb | 
mariadb | 2024-01-30 23:12:44+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
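The db.sh trace above issues one CREATE DATABASE and one GRANT per Policy Framework schema, then flushes privileges. A dry-run sketch that builds the same statement sequence the entrypoint script pipes to mysql (a hypothetical helper for illustration, not part of the actual image):

```python
# Dry-run sketch of the db.sh init loop: build the SQL statements the
# entrypoint executes via `mysql -uroot -p"${MYSQL_ROOT_PASSWORD}"`.
DATABASES = ["migration", "pooling", "policyadmin",
             "operationshistory", "clampacm", "policyclamp"]

def init_statements(user="policy_user"):
    """Return the CREATE/GRANT statements in the order db.sh runs them."""
    stmts = []
    for db in DATABASES:
        stmts.append(f"CREATE DATABASE IF NOT EXISTS {db};")
        stmts.append(f"GRANT ALL PRIVILEGES ON `{db}`.* TO '{user}'@'%' ;")
    stmts.append("FLUSH PRIVILEGES;")
    return stmts

statements = init_statements()
```

Note the `+`-prefixed lines in the log are bash `-x` tracing of exactly these statements, with `MYSQL_USER` expanded to `policy_user`.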
mariadb | 
mariadb | 2024-01-30 23:12:44 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Number of transaction pools: 1
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
mariadb | 2024-01-30 23:12:44 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
mariadb | 2024-01-30 23:12:44 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
mariadb | 2024-01-30 23:12:44 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Completed initialization of buffer pool
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: 128 rollback segments are active.
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: log sequence number 347319; transaction id 299
mariadb | 2024-01-30 23:12:44 0 [Note] Plugin 'FEEDBACK' is disabled.
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
mariadb | 2024-01-30 23:12:44 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
mariadb | 2024-01-30 23:12:44 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
mariadb | 2024-01-30 23:12:44 0 [Note] Server socket created on IP: '0.0.0.0'.
mariadb | 2024-01-30 23:12:44 0 [Note] Server socket created on IP: '::'.
mariadb | 2024-01-30 23:12:44 0 [Note] mariadbd: ready for connections.
mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
mariadb | 2024-01-30 23:12:44 0 [Note] InnoDB: Buffer pool(s) load completed at 240130 23:12:44
mariadb | 2024-01-30 23:12:45 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication)
mariadb | 2024-01-30 23:12:45 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication)
mariadb | 2024-01-30 23:12:45 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication)
mariadb | 2024-01-30 23:12:46 28 [Warning] Aborted connection 28 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication)
grafana | logger=settings t=2024-01-30T23:12:38.877944235Z level=info msg="Starting Grafana" version=10.3.1 commit=00a22ff8b28550d593ec369ba3da1b25780f0a4a branch=HEAD compiled=2024-01-22T18:40:42Z
grafana | logger=settings t=2024-01-30T23:12:38.878163731Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
grafana | logger=settings t=2024-01-30T23:12:38.878174171Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
grafana | logger=settings t=2024-01-30T23:12:38.878178912Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
grafana | logger=settings t=2024-01-30T23:12:38.878181902Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
grafana | logger=settings t=2024-01-30T23:12:38.878184902Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-01-30T23:12:38.878188732Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
grafana | logger=settings t=2024-01-30T23:12:38.878191712Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
grafana | logger=settings t=2024-01-30T23:12:38.878194712Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
grafana | logger=settings t=2024-01-30T23:12:38.878197822Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
grafana | logger=settings t=2024-01-30T23:12:38.878200532Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
grafana | logger=settings t=2024-01-30T23:12:38.878203472Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
grafana | logger=settings t=2024-01-30T23:12:38.878206282Z level=info msg=Target target=[all]
grafana | logger=settings t=2024-01-30T23:12:38.878211032Z level=info msg="Path Home" path=/usr/share/grafana
grafana | logger=settings t=2024-01-30T23:12:38.878213873Z level=info msg="Path Data" path=/var/lib/grafana
grafana | logger=settings t=2024-01-30T23:12:38.878218323Z level=info msg="Path Logs" path=/var/log/grafana
grafana | logger=settings t=2024-01-30T23:12:38.878221083Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
grafana | logger=settings t=2024-01-30T23:12:38.878223763Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
grafana | logger=settings t=2024-01-30T23:12:38.878227753Z level=info msg="App mode production"
grafana | logger=sqlstore t=2024-01-30T23:12:38.8784883Z level=info msg="Connecting to DB" dbtype=sqlite3
grafana | logger=sqlstore t=2024-01-30T23:12:38.878505231Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
grafana | logger=migrator t=2024-01-30T23:12:38.879104817Z level=info msg="Starting DB migrations"
grafana | logger=migrator t=2024-01-30T23:12:38.880107305Z level=info msg="Executing migration" id="create migration_log table"
grafana | logger=migrator t=2024-01-30T23:12:38.880840955Z level=info msg="Migration successfully executed" id="create migration_log table" duration=733.57µs
grafana | logger=migrator t=2024-01-30T23:12:38.887215472Z level=info msg="Executing migration" id="create user table"
grafana | logger=migrator t=2024-01-30T23:12:38.887718735Z level=info msg="Migration successfully executed" id="create user table" duration=503.083µs
grafana | logger=migrator t=2024-01-30T23:12:38.891429479Z level=info msg="Executing migration" id="add unique index user.login"
grafana | logger=migrator t=2024-01-30T23:12:38.892664843Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.235114ms
grafana | logger=migrator t=2024-01-30T23:12:38.89615533Z level=info msg="Executing migration" id="add unique index user.email"
grafana | logger=migrator t=2024-01-30T23:12:38.897528528Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.373409ms
grafana | logger=migrator t=2024-01-30T23:12:38.90372452Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
grafana | logger=migrator t=2024-01-30T23:12:38.904515281Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=792.422µs
grafana | logger=migrator t=2024-01-30T23:12:38.908134362Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
grafana | logger=migrator t=2024-01-30T23:12:38.909191801Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.062659ms
grafana | logger=migrator t=2024-01-30T23:12:38.913167341Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
grafana | logger=migrator t=2024-01-30T23:12:38.917146022Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.979591ms
grafana | logger=migrator t=2024-01-30T23:12:38.923023545Z level=info msg="Executing migration" id="create user table v2"
grafana | logger=migrator t=2024-01-30T23:12:38.923789026Z level=info msg="Migration successfully executed" id="create user table v2" duration=765.64µs
grafana | logger=migrator t=2024-01-30T23:12:38.927168149Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
grafana | logger=migrator t=2024-01-30T23:12:38.927926651Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=758.292µs
grafana | logger=migrator t=2024-01-30T23:12:38.931258873Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
grafana | logger=migrator t=2024-01-30T23:12:38.932130387Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=871.614µs
grafana | logger=migrator t=2024-01-30T23:12:38.937572628Z level=info msg="Executing migration" id="copy data_source v1 to v2"
grafana | logger=migrator t=2024-01-30T23:12:38.937987009Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=414.361µs
grafana | logger=migrator t=2024-01-30T23:12:38.941106225Z level=info msg="Executing migration" id="Drop old table user_v1"
grafana | logger=migrator t=2024-01-30T23:12:38.941639321Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=534.076µs
grafana | logger=migrator t=2024-01-30T23:12:38.945413295Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
grafana | logger=migrator t=2024-01-30T23:12:38.947121052Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.706587ms
grafana | logger=migrator t=2024-01-30T23:12:38.951195926Z level=info msg="Executing migration" id="Update user table charset"
grafana | logger=migrator t=2024-01-30T23:12:38.95137047Z level=info msg="Migration successfully executed" id="Update user table charset" duration=173.455µs
grafana | logger=migrator t=2024-01-30T23:12:38.960085882Z level=info msg="Executing migration" id="Add last_seen_at column to user"
grafana | logger=migrator t=2024-01-30T23:12:38.961927353Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.842561ms
grafana | logger=migrator t=2024-01-30T23:12:38.967334473Z level=info msg="Executing migration" id="Add missing user data"
grafana | logger=migrator t=2024-01-30T23:12:38.967824076Z level=info msg="Migration successfully executed" id="Add missing user data" duration=493.113µs
grafana | logger=migrator t=2024-01-30T23:12:38.970885931Z level=info msg="Executing migration" id="Add is_disabled column to user"
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,600] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-01-30 23:13:15,750] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,750] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,750] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,750] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,750] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,750] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,750] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,751] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
policy-pap | Waiting for mariadb port 3306...
policy-pap | mariadb (172.17.0.4:3306) open
policy-pap | Waiting for kafka port 9092...
policy-pap | kafka (172.17.0.9:9092) open
policy-pap | Waiting for api port 6969...
policy-pap | api (172.17.0.8:6969) open
policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml
policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json
policy-pap |
policy-pap |   .   ____          _            __ _ _
policy-pap |  /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
policy-pap |  \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
policy-pap |   '  |____| .__|_| |_|_| |_\__, | / / / /
policy-pap |  =========|_|==============|___/=/_/_/_/
policy-pap |  :: Spring Boot ::                (v3.1.7)
policy-pap |
policy-pap | [2024-01-30T23:13:05.380+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 30 (/app/pap.jar started by policy in /opt/app/policy/pap/bin)
policy-pap | [2024-01-30T23:13:05.382+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default"
policy-pap | [2024-01-30T23:13:07.170+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
policy-pap | [2024-01-30T23:13:07.276+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 96 ms. Found 7 JPA repository interfaces.
policy-pap | [2024-01-30T23:13:07.773+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-01-30T23:13:07.774+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution.
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
policy-pap | [2024-01-30T23:13:08.449+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
policy-pap | [2024-01-30T23:13:08.457+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
policy-pap | [2024-01-30T23:13:08.459+00:00|INFO|StandardService|main] Starting service [Tomcat]
policy-pap | [2024-01-30T23:13:08.460+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
policy-pap | [2024-01-30T23:13:08.554+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
policy-pap | [2024-01-30T23:13:08.554+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3092 ms
policy-pap | [2024-01-30T23:13:08.966+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
policy-pap | [2024-01-30T23:13:09.045+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
policy-pap | [2024-01-30T23:13:09.049+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
policy-pap | [2024-01-30T23:13:09.098+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
policy-pap | [2024-01-30T23:13:09.432+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
policy-pap | [2024-01-30T23:13:09.450+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
policy-pap | [2024-01-30T23:13:09.549+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@288ca5f0
policy-pap | [2024-01-30T23:13:09.551+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
policy-pap | [2024-01-30T23:13:09.578+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default)
policy-pap | [2024-01-30T23:13:09.579+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead
policy-pap | [2024-01-30T23:13:11.414+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
policy-pap | [2024-01-30T23:13:11.417+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
policy-pap | [2024-01-30T23:13:11.979+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
policy-pap | [2024-01-30T23:13:12.659+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
policy-pap | [2024-01-30T23:13:12.798+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution.
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
policy-pap | [2024-01-30T23:13:13.098+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | 	allow.auto.create.topics = true
policy-pap | 	auto.commit.interval.ms = 5000
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	auto.offset.reset = latest
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	check.crcs = true
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = consumer-af90a869-32d4-41c0-900c-5574709c07e7-1
policy-pap | 	client.rack =
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	default.api.timeout.ms = 60000
policy-pap | 	enable.auto.commit = true
policy-pap | 	exclude.internal.topics = true
policy-pap | 	fetch.max.bytes = 52428800
policy-pap | 	fetch.max.wait.ms = 500
policy-pap | 	fetch.min.bytes = 1
policy-pap | 	group.id = af90a869-32d4-41c0-900c-5574709c07e7
policy-pap | 	group.instance.id = null
policy-pap | 	heartbeat.interval.ms = 3000
policy-pap | 	interceptor.classes = []
policy-pap | 	internal.leave.group.on.close = true
policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | 	isolation.level = read_uncommitted
policy-pap | 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | 	max.partition.fetch.bytes = 1048576
policy-pap | 	max.poll.interval.ms = 300000
policy-pap | 	max.poll.records = 500
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-db-migrator | Waiting for mariadb port 3306...
policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused
policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused
policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded!
policy-db-migrator | 321 blocks
policy-db-migrator | Preparing upgrade release version: 0800
policy-db-migrator | Preparing upgrade release version: 0900
policy-db-migrator | Preparing upgrade release version: 1000
policy-db-migrator | Preparing upgrade release version: 1100
policy-db-migrator | Preparing upgrade release version: 1200
policy-db-migrator | Preparing upgrade release version: 1300
policy-db-migrator | Done
policy-db-migrator | name version
policy-db-migrator | policyadmin 0
policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
policy-db-migrator | upgrade: 0 -> 1300
policy-db-migrator |
policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | 	receive.buffer.bytes = 65536
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
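The "Waiting for <service> port ..." messages from policy-pap and policy-db-migrator, and the repeated "nc: connect to mariadb ... Connection refused" lines, come from a startup gate that polls each dependency's TCP port with nc until it opens. A minimal sketch of such a loop (hypothetical `wait_for_port` helper; the actual scripts baked into the ONAP container images may differ):

```shell
# Poll HOST PORT with nc until the TCP connection succeeds, or give up
# after MAX_TRIES attempts (default 60, one second apart).
wait_for_port() {
  host=$1; port=$2; max_tries=${3:-60}
  tries=0
  # nc -z probes the port without sending data; available in common nc variants.
  until nc -z "$host" "$port" 2>/dev/null; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max_tries" ]; then
      echo "timed out waiting for ${host}:${port}" >&2
      return 1
    fi
    sleep 1
  done
  echo "Connection to ${host} ${port} port [tcp] succeeded!"
}
```

Something like `wait_for_port mariadb 3306` would print a "succeeded!" line once MariaDB starts accepting connections, matching the transition seen in the log above.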
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	session.timeout.ms = 45000
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2024-01-30T23:13:13.254+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-pap | [2024-01-30T23:13:13.254+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-pap | [2024-01-30T23:13:13.254+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706656393252
policy-pap | [2024-01-30T23:13:13.256+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-1, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2024-01-30T23:13:13.257+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | 	allow.auto.create.topics = true
policy-pap | 	auto.commit.interval.ms = 5000
policy-pap | 	auto.include.jmx.reporter = true
policy-pap | 	auto.offset.reset = latest
policy-pap | 	bootstrap.servers = [kafka:9092]
policy-pap | 	check.crcs = true
policy-pap | 	client.dns.lookup = use_all_dns_ips
policy-pap | 	client.id = consumer-policy-pap-2
policy-pap | 	client.rack =
policy-pap | 	connections.max.idle.ms = 540000
policy-pap | 	default.api.timeout.ms = 60000
policy-pap | 	enable.auto.commit = true
policy-pap | 	exclude.internal.topics = true
policy-pap | 	fetch.max.bytes = 52428800
policy-pap | 	fetch.max.wait.ms = 500
policy-pap | 	fetch.min.bytes = 1
policy-pap | 	group.id = policy-pap
policy-pap | 	group.instance.id = null
policy-pap | 	heartbeat.interval.ms = 3000
policy-pap | 	interceptor.classes = []
policy-pap | 	internal.leave.group.on.close = true
policy-pap | 	internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | 	isolation.level = read_uncommitted
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,752] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) kafka | [2024-01-30 23:13:15,754] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) kafka | [2024-01-30 23:13:15,754] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-01-30 23:13:15,754] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-01-30 23:13:15,754] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger)
kafka | [2024-01-30 23:13:15,754] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger)
kafka | [2024-01-30 23:13:15,754] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-01-30 23:13:15,754] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-01-30 23:13:15,754] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger)
kafka | [2024-01-30 23:13:15,754] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-01-30 23:13:15,754] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-01-30 23:13:15,755] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-01-30 23:13:15,756] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-01-30 23:13:15,758] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger)
kafka | [2024-01-30 23:13:15,761] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,763] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | 	metadata.max.age.ms = 300000
policy-pap | 	metric.reporters = []
policy-pap | 	metrics.num.samples = 2
policy-pap | 	metrics.recording.level = INFO
policy-pap | 	metrics.sample.window.ms = 30000
policy-pap | 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | 	receive.buffer.bytes = 65536
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	session.timeout.ms = 45000
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2024-01-30T23:13:13.263+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-pap | [2024-01-30T23:13:13.263+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-pap | [2024-01-30T23:13:13.263+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706656393263
policy-pap | [2024-01-30T23:13:13.263+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
grafana | logger=migrator t=2024-01-30T23:12:38.972582569Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.696658ms
grafana | logger=migrator t=2024-01-30T23:12:38.975583712Z level=info msg="Executing migration" id="Add index user.login/user.email"
grafana | logger=migrator t=2024-01-30T23:12:38.976394204Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=828.113µs
grafana | logger=migrator t=2024-01-30T23:12:38.981462135Z level=info msg="Executing migration" id="Add is_service_account column to user"
grafana | logger=migrator t=2024-01-30T23:12:38.983006957Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.544412ms
grafana | logger=migrator t=2024-01-30T23:12:38.989768935Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
grafana | logger=migrator t=2024-01-30T23:12:39.002465125Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=12.69599ms
grafana | logger=migrator t=2024-01-30T23:12:39.0365383Z level=info msg="Executing migration" id="create temp user table v1-7"
grafana | logger=migrator t=2024-01-30T23:12:39.037616948Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.080998ms
grafana | logger=migrator t=2024-01-30T23:12:39.045190951Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
grafana | logger=migrator t=2024-01-30T23:12:39.046874294Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.678614ms
grafana | logger=migrator t=2024-01-30T23:12:39.049938522Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
grafana | logger=migrator t=2024-01-30T23:12:39.0506582Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=719.548µs
grafana | logger=migrator t=2024-01-30T23:12:39.05339016Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
grafana | logger=migrator t=2024-01-30T23:12:39.05417357Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=783.1µs
grafana | logger=migrator t=2024-01-30T23:12:39.059073255Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
grafana | logger=migrator t=2024-01-30T23:12:39.059829495Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=755.92µs
grafana | logger=migrator t=2024-01-30T23:12:39.062730799Z level=info msg="Executing migration" id="Update temp_user table charset"
grafana | logger=migrator t=2024-01-30T23:12:39.062803241Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=73.182µs
grafana | logger=migrator t=2024-01-30T23:12:39.06591484Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
grafana | logger=migrator t=2024-01-30T23:12:39.067613333Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.696432ms
grafana | logger=migrator t=2024-01-30T23:12:39.073277958Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
grafana | logger=migrator t=2024-01-30T23:12:39.074578331Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.288702ms
grafana | logger=migrator t=2024-01-30T23:12:39.078928802Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
grafana | logger=migrator t=2024-01-30T23:12:39.080074561Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.147969ms
grafana | logger=migrator t=2024-01-30T23:12:39.083291383Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
grafana | logger=migrator t=2024-01-30T23:12:39.08396788Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=676.527µs
grafana | logger=migrator t=2024-01-30T23:12:39.090460106Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
grafana | logger=migrator t=2024-01-30T23:12:39.096711125Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=6.248159ms
grafana | logger=migrator t=2024-01-30T23:12:39.100949763Z level=info msg="Executing migration" id="create temp_user v2"
grafana | logger=migrator t=2024-01-30T23:12:39.101507368Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=557.365µs
grafana | logger=migrator t=2024-01-30T23:12:39.105032388Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
grafana | logger=migrator t=2024-01-30T23:12:39.106165506Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.135888ms
grafana | logger=migrator t=2024-01-30T23:12:39.111646426Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
grafana | logger=migrator t=2024-01-30T23:12:39.113112594Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.467248ms
grafana | logger=migrator t=2024-01-30T23:12:39.120162743Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
grafana | logger=migrator t=2024-01-30T23:12:39.122344949Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=2.180106ms
grafana | logger=migrator t=2024-01-30T23:12:39.127188543Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
grafana | logger=migrator t=2024-01-30T23:12:39.129106651Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.918118ms
grafana | logger=migrator t=2024-01-30T23:12:39.134229482Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
grafana | logger=migrator t=2024-01-30T23:12:39.135066344Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=839.982µs
grafana | logger=migrator t=2024-01-30T23:12:39.141852687Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
grafana | logger=migrator t=2024-01-30T23:12:39.142528144Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=676.227µs
grafana | logger=migrator t=2024-01-30T23:12:39.147798168Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
grafana | logger=migrator t=2024-01-30T23:12:39.148184358Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire"
duration=386.4µs grafana | logger=migrator t=2024-01-30T23:12:39.150440586Z level=info msg="Executing migration" id="create star table" grafana | logger=migrator t=2024-01-30T23:12:39.15139259Z level=info msg="Migration successfully executed" id="create star table" duration=951.554µs grafana | logger=migrator t=2024-01-30T23:12:39.157306602Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" grafana | logger=migrator t=2024-01-30T23:12:39.158120692Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=814.881µs grafana | logger=migrator t=2024-01-30T23:12:39.160893123Z level=info msg="Executing migration" id="create org table v1" grafana | logger=migrator t=2024-01-30T23:12:39.161536739Z level=info msg="Migration successfully executed" id="create org table v1" duration=638.236µs policy-pap | [2024-01-30T23:13:13.559+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json policy-pap | [2024-01-30T23:13:13.927+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning policy-pap | [2024-01-30T23:13:14.154+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@1238a074, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@35b58254, org.springframework.security.web.context.SecurityContextHolderFilter@5e198c40, org.springframework.security.web.header.HeaderWriterFilter@44c2e8a8, org.springframework.security.web.authentication.logout.LogoutFilter@50f13494, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1c3b221f, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@39420d59, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@5dd227b7, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@73baf7f0, org.springframework.security.web.access.ExceptionTranslationFilter@7120daa6, org.springframework.security.web.access.intercept.AuthorizationFilter@259c6ab8] policy-pap | [2024-01-30T23:13:14.877+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-pap | [2024-01-30T23:13:14.970+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] policy-pap | [2024-01-30T23:13:14.985+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' policy-pap | [2024-01-30T23:13:15.000+00:00|INFO|ServiceManager|main] Policy PAP starting policy-pap | [2024-01-30T23:13:15.000+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry policy-pap | [2024-01-30T23:13:15.001+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters policy-pap | [2024-01-30T23:13:15.001+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener policy-pap | [2024-01-30T23:13:15.001+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID 
Dispatcher policy-pap | [2024-01-30T23:13:15.002+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher policy-pap | [2024-01-30T23:13:15.002+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher policy-pap | [2024-01-30T23:13:15.008+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=af90a869-32d4-41c0-900c-5574709c07e7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@17ebbf1e policy-pap | [2024-01-30T23:13:15.017+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=af90a869-32d4-41c0-900c-5574709c07e7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-pap | [2024-01-30T23:13:15.018+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-pap | allow.auto.create.topics = true policy-pap | auto.commit.interval.ms = 5000 policy-pap | auto.include.jmx.reporter = true policy-pap | auto.offset.reset = latest policy-pap | bootstrap.servers = [kafka:9092] policy-pap | 
check.crcs = true policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = consumer-af90a869-32d4-41c0-900c-5574709c07e7-3 policy-pap | client.rack = policy-pap | connections.max.idle.ms = 540000 policy-pap | default.api.timeout.ms = 60000 policy-pap | enable.auto.commit = true policy-pap | exclude.internal.topics = true policy-pap | fetch.max.bytes = 52428800 policy-pap | fetch.max.wait.ms = 500 policy-pap | fetch.min.bytes = 1 policy-pap | group.id = af90a869-32d4-41c0-900c-5574709c07e7 policy-pap | group.instance.id = null policy-pap | heartbeat.interval.ms = 3000 policy-pap | interceptor.classes = [] policy-pap | internal.leave.group.on.close = true policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-pap | isolation.level = read_uncommitted policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-pap | max.partition.fetch.bytes = 1048576 policy-pap | max.poll.interval.ms = 300000 policy-pap | max.poll.records = 500 policy-pap | metadata.max.age.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-pap | receive.buffer.bytes = 65536 policy-pap | reconnect.backoff.max.ms = 1000 policy-pap | reconnect.backoff.ms = 50 policy-pap | request.timeout.ms = 30000 policy-pap | retry.backoff.ms = 100 policy-pap | sasl.client.callback.handler.class = null policy-pap | sasl.jaas.config = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | sasl.kerberos.service.name = null policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | 
sasl.login.callback.handler.class = null policy-pap | sasl.login.class = null policy-pap | sasl.login.connect.timeout.ms = null policy-pap | sasl.login.read.timeout.ms = null policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-pap | sasl.login.refresh.window.factor = 0.8 policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-pap | sasl.login.retry.backoff.ms = 100 policy-pap | sasl.mechanism = GSSAPI policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 grafana | logger=migrator t=2024-01-30T23:12:39.16625451Z level=info msg="Executing migration" id="create index UQE_org_name - v1" policy-pap | sasl.oauthbearer.expected.audience = null policy-pap | sasl.oauthbearer.expected.issuer = null policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null policy-pap | send.buffer.bytes = 131072 policy-pap | session.timeout.ms = 45000 policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-pap | socket.connection.setup.timeout.ms = 10000 policy-pap | ssl.cipher.suites = null policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https policy-pap | ssl.engine.factory.class = null grafana | logger=migrator t=2024-01-30T23:12:39.168288261Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=2.031781ms grafana | logger=migrator t=2024-01-30T23:12:39.172859738Z level=info msg="Executing migration" id="create org_user 
table v1" grafana | logger=migrator t=2024-01-30T23:12:39.173925025Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.065116ms grafana | logger=migrator t=2024-01-30T23:12:39.179600849Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" grafana | logger=migrator t=2024-01-30T23:12:39.180832031Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.232482ms grafana | logger=migrator t=2024-01-30T23:12:39.183943311Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" grafana | logger=migrator t=2024-01-30T23:12:39.184773361Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=829.591µs grafana | logger=migrator t=2024-01-30T23:12:39.190363074Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" grafana | logger=migrator t=2024-01-30T23:12:39.191181326Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=818.292µs grafana | logger=migrator t=2024-01-30T23:12:39.193910565Z level=info msg="Executing migration" id="Update org table charset" grafana | logger=migrator t=2024-01-30T23:12:39.193945176Z level=info msg="Migration successfully executed" id="Update org table charset" duration=23.331µs grafana | logger=migrator t=2024-01-30T23:12:39.19682788Z level=info msg="Executing migration" id="Update org_user table charset" grafana | logger=migrator t=2024-01-30T23:12:39.19685059Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=27.261µs grafana | logger=migrator t=2024-01-30T23:12:39.199551409Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" grafana | logger=migrator t=2024-01-30T23:12:39.199737604Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" 
duration=186.195µs grafana | logger=migrator t=2024-01-30T23:12:39.204741431Z level=info msg="Executing migration" id="create dashboard table" grafana | logger=migrator t=2024-01-30T23:12:39.20587775Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.132919ms grafana | logger=migrator t=2024-01-30T23:12:39.209131703Z level=info msg="Executing migration" id="add index dashboard.account_id" grafana | logger=migrator t=2024-01-30T23:12:39.210468187Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.328083ms grafana | logger=migrator t=2024-01-30T23:12:39.213613838Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" grafana | logger=migrator t=2024-01-30T23:12:39.214454409Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=835.491µs grafana | logger=migrator t=2024-01-30T23:12:39.217901897Z level=info msg="Executing migration" id="create dashboard_tag table" grafana | logger=migrator t=2024-01-30T23:12:39.218572934Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=670.857µs grafana | logger=migrator t=2024-01-30T23:12:39.223418817Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" grafana | logger=migrator t=2024-01-30T23:12:39.224193597Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=774.63µs grafana | logger=migrator t=2024-01-30T23:12:39.227161693Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" grafana | logger=migrator t=2024-01-30T23:12:39.227876432Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=714.629µs grafana | logger=migrator t=2024-01-30T23:12:39.231312649Z level=info msg="Executing migration" id="Rename table dashboard to 
dashboard_v1 - v1" grafana | logger=migrator t=2024-01-30T23:12:39.237658051Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.344282ms grafana | logger=migrator t=2024-01-30T23:12:39.242471924Z level=info msg="Executing migration" id="create dashboard v2" grafana | logger=migrator t=2024-01-30T23:12:39.243143091Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=671.317µs grafana | logger=migrator t=2024-01-30T23:12:39.246720682Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" grafana | logger=migrator t=2024-01-30T23:12:39.247444581Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=720.57µs grafana | logger=migrator t=2024-01-30T23:12:39.250215481Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" grafana | logger=migrator t=2024-01-30T23:12:39.250988881Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=773.02µs grafana | logger=migrator t=2024-01-30T23:12:39.256769578Z level=info msg="Executing migration" id="copy dashboard v1 to v2" grafana | logger=migrator t=2024-01-30T23:12:39.257351013Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=584.455µs grafana | logger=migrator t=2024-01-30T23:12:39.262222347Z level=info msg="Executing migration" id="drop table dashboard_v1" grafana | logger=migrator t=2024-01-30T23:12:39.26311324Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=891.253µs grafana | logger=migrator t=2024-01-30T23:12:39.266042075Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" grafana | logger=migrator t=2024-01-30T23:12:39.266129437Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=88.553µs grafana | logger=migrator 
t=2024-01-30T23:12:39.268707003Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" grafana | logger=migrator t=2024-01-30T23:12:39.270514679Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.806616ms grafana | logger=migrator t=2024-01-30T23:12:39.276235285Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" grafana | logger=migrator t=2024-01-30T23:12:39.277949639Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.714204ms grafana | logger=migrator t=2024-01-30T23:12:39.281007657Z level=info msg="Executing migration" id="Add column gnetId in dashboard" grafana | logger=migrator t=2024-01-30T23:12:39.282815072Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.806915ms grafana | logger=migrator t=2024-01-30T23:12:39.286794995Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" grafana | logger=migrator t=2024-01-30T23:12:39.287562044Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=767.02µs grafana | logger=migrator t=2024-01-30T23:12:39.293519356Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" grafana | logger=migrator t=2024-01-30T23:12:39.295750983Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.234777ms grafana | logger=migrator t=2024-01-30T23:12:39.299368265Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" grafana | logger=migrator t=2024-01-30T23:12:39.300149734Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=781.709µs grafana | logger=migrator t=2024-01-30T23:12:39.302670029Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" grafana | logger=migrator 
t=2024-01-30T23:12:39.303433459Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=762.99µs grafana | logger=migrator t=2024-01-30T23:12:39.308652002Z level=info msg="Executing migration" id="Update dashboard table charset" grafana | logger=migrator t=2024-01-30T23:12:39.308683283Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=32.011µs grafana | logger=migrator t=2024-01-30T23:12:39.311324869Z level=info msg="Executing migration" id="Update dashboard_tag table charset" grafana | logger=migrator t=2024-01-30T23:12:39.311371981Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=39.181µs grafana | logger=migrator t=2024-01-30T23:12:39.313756582Z level=info msg="Executing migration" id="Add column folder_id in dashboard" grafana | logger=migrator t=2024-01-30T23:12:39.317013336Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.261994ms grafana | logger=migrator t=2024-01-30T23:12:39.326905427Z level=info msg="Executing migration" id="Add column isFolder in dashboard" grafana | logger=migrator t=2024-01-30T23:12:39.329894644Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.988176ms grafana | logger=migrator t=2024-01-30T23:12:39.336545613Z level=info msg="Executing migration" id="Add column has_acl in dashboard" grafana | logger=migrator t=2024-01-30T23:12:39.338434151Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.884938ms grafana | logger=migrator t=2024-01-30T23:12:39.34228546Z level=info msg="Executing migration" id="Add column uid in dashboard" grafana | logger=migrator t=2024-01-30T23:12:39.344162437Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.874747ms grafana | logger=migrator t=2024-01-30T23:12:39.351193487Z 
level=info msg="Executing migration" id="Update uid column values in dashboard" grafana | logger=migrator t=2024-01-30T23:12:39.351474674Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=281.397µs grafana | logger=migrator t=2024-01-30T23:12:39.354651915Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" grafana | logger=migrator t=2024-01-30T23:12:39.355785034Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.132639ms grafana | logger=migrator t=2024-01-30T23:12:39.35876604Z level=info msg="Executing migration" id="Remove unique index org_id_slug" grafana | logger=migrator t=2024-01-30T23:12:39.359456178Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=689.878µs grafana | logger=migrator t=2024-01-30T23:12:39.364742083Z level=info msg="Executing migration" id="Update dashboard title length" grafana | logger=migrator t=2024-01-30T23:12:39.364766084Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=24.551µs grafana | logger=migrator t=2024-01-30T23:12:39.368729814Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" grafana | logger=migrator t=2024-01-30T23:12:39.369951735Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.221821ms grafana | logger=migrator t=2024-01-30T23:12:39.375582899Z level=info msg="Executing migration" id="create dashboard_provisioning" grafana | logger=migrator t=2024-01-30T23:12:39.376531344Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=948.405µs grafana | logger=migrator t=2024-01-30T23:12:39.379447838Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" grafana | logger=migrator 
t=2024-01-30T23:12:39.390009988Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=10.556859ms grafana | logger=migrator t=2024-01-30T23:12:39.396726219Z level=info msg="Executing migration" id="create dashboard_provisioning v2" grafana | logger=migrator t=2024-01-30T23:12:39.397323304Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=597.305µs kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) kafka | [2024-01-30 23:13:15,764] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) kafka | 
[2024-01-30 23:13:15,764] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
kafka | [2024-01-30 23:13:15,769] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2024-01-30 23:13:15,771] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,771] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,771] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,771] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,772] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2024-01-30T23:13:15.023+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-pap | [2024-01-30T23:13:15.023+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-pap | [2024-01-30T23:13:15.023+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706656395023
policy-pap | [2024-01-30T23:13:15.023+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2024-01-30T23:13:15.023+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
policy-pap | [2024-01-30T23:13:15.023+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=798059c5-2a41-4d37-9e93-8ee87cf07c75, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@59db8216
policy-pap | [2024-01-30T23:13:15.023+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=798059c5-2a41-4d37-9e93-8ee87cf07c75, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2024-01-30T23:13:15.024+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | allow.auto.create.topics = true
policy-pap | auto.commit.interval.ms = 5000
policy-pap | auto.include.jmx.reporter = true
policy-pap | auto.offset.reset = latest
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | check.crcs = true
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = consumer-policy-pap-4
policy-pap | client.rack =
policy-pap | connections.max.idle.ms = 540000
policy-pap | default.api.timeout.ms = 60000
policy-pap | enable.auto.commit = true
policy-pap | exclude.internal.topics = true
policy-pap | fetch.max.bytes = 52428800
policy-pap | fetch.max.wait.ms = 500
policy-pap | fetch.min.bytes = 1
policy-pap | group.id = policy-pap
policy-pap | group.instance.id = null
policy-pap | heartbeat.interval.ms = 3000
kafka | [2024-01-30 23:13:15,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,773] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,774] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,775] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,776] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,776] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,776] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,776] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,776] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,776] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,776] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
kafka | [2024-01-30 23:13:15,814] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
kafka | [2024-01-30 23:13:15,815] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
kafka | [2024-01-30 23:13:15,815] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
kafka | [2024-01-30 23:13:15,815] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
kafka | [2024-01-30 23:13:15,815] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
kafka | [2024-01-30 23:13:15,815] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
kafka | [2024-01-30 23:13:15,815] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
kafka | [2024-01-30 23:13:15,815] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
kafka | [2024-01-30 23:13:15,815] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
kafka | [2024-01-30 23:13:15,815] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
kafka | [2024-01-30 23:13:15,815] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
grafana | logger=migrator t=2024-01-30T23:12:39.400887624Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
grafana | logger=migrator t=2024-01-30T23:12:39.401638724Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=751.15µs
grafana | logger=migrator t=2024-01-30T23:12:39.40542578Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
grafana | logger=migrator t=2024-01-30T23:12:39.406674163Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.247663ms
grafana | logger=migrator t=2024-01-30T23:12:39.411228028Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
grafana | logger=migrator t=2024-01-30T23:12:39.41168375Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=447.632µs
grafana | logger=migrator t=2024-01-30T23:12:39.414408959Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
grafana | logger=migrator t=2024-01-30T23:12:39.414945713Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=536.554µs
grafana | logger=migrator t=2024-01-30T23:12:39.418904564Z level=info msg="Executing migration" id="Add check_sum column"
grafana | logger=migrator t=2024-01-30T23:12:39.420813073Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.906299ms
grafana | logger=migrator t=2024-01-30T23:12:39.428079138Z level=info msg="Executing migration" id="Add index for dashboard_title"
grafana | logger=migrator t=2024-01-30T23:12:39.428837127Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=757.489µs
grafana | logger=migrator t=2024-01-30T23:12:39.431770162Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
grafana | logger=migrator t=2024-01-30T23:12:39.43204133Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=264.688µs
grafana | logger=migrator t=2024-01-30T23:12:39.436029081Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
grafana | logger=migrator t=2024-01-30T23:12:39.436281457Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=252.606µs
grafana | logger=migrator t=2024-01-30T23:12:39.440949327Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
grafana | logger=migrator t=2024-01-30T23:12:39.442337692Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.387675ms
grafana | logger=migrator t=2024-01-30T23:12:39.446292553Z level=info msg="Executing migration" id="Add isPublic for dashboard"
grafana | logger=migrator t=2024-01-30T23:12:39.449601668Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.308824ms
grafana | logger=migrator t=2024-01-30T23:12:39.454182784Z level=info msg="Executing migration" id="create data_source table"
policy-pap | interceptor.classes = []
policy-pap | internal.leave.group.on.close = true
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-pap | isolation.level = read_uncommitted
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap | max.partition.fetch.bytes = 1048576
policy-pap | max.poll.interval.ms = 300000
policy-pap | max.poll.records = 500
policy-pap | metadata.max.age.ms = 300000
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-pap | receive.buffer.bytes = 65536
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-01-30 23:13:15,816] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
kafka | [2024-01-30 23:13:15,816] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
kafka | [2024-01-30 23:13:15,816] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
kafka | [2024-01-30 23:13:15,816] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
kafka | [2024-01-30 23:13:15,816] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
kafka | [2024-01-30 23:13:15,816] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
kafka | [2024-01-30 23:13:15,816] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
kafka | [2024-01-30 23:13:15,816] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
kafka | [2024-01-30 23:13:15,816] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
kafka | [2024-01-30 23:13:15,816] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
kafka | [2024-01-30 23:13:15,816] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
kafka | [2024-01-30 23:13:15,817] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
kafka | [2024-01-30 23:13:15,818] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
kafka | [2024-01-30 23:13:15,818] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
kafka | [2024-01-30 23:13:15,818] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
kafka | [2024-01-30 23:13:15,818] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
kafka | [2024-01-30 23:13:15,818] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
kafka | [2024-01-30 23:13:15,818] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
kafka | [2024-01-30 23:13:15,818] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
kafka | [2024-01-30 23:13:15,818] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
kafka | [2024-01-30 23:13:15,818] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from
controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-01-30 23:13:15,818] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-01-30 23:13:15,818] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-01-30 23:13:15,819] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-01-30 23:13:15,819] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) grafana | logger=migrator t=2024-01-30T23:12:39.455267042Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.084958ms grafana | logger=migrator t=2024-01-30T23:12:39.461162162Z level=info msg="Executing migration" id="add index data_source.account_id" grafana | logger=migrator t=2024-01-30T23:12:39.46187596Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=715.058µs grafana | logger=migrator t=2024-01-30T23:12:39.466070448Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" grafana | logger=migrator t=2024-01-30T23:12:39.467314979Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.244191ms grafana | logger=migrator t=2024-01-30T23:12:39.471100246Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" grafana | logger=migrator t=2024-01-30T23:12:39.472227075Z level=info 
msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.126919ms grafana | logger=migrator t=2024-01-30T23:12:39.476997407Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" grafana | logger=migrator t=2024-01-30T23:12:39.477688964Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=691.117µs grafana | logger=migrator t=2024-01-30T23:12:39.481355187Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" grafana | logger=migrator t=2024-01-30T23:12:39.49244382Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=11.089203ms grafana | logger=migrator t=2024-01-30T23:12:39.495648932Z level=info msg="Executing migration" id="create data_source table v2" grafana | logger=migrator t=2024-01-30T23:12:39.496182335Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=530.063µs grafana | logger=migrator t=2024-01-30T23:12:39.500737532Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" grafana | logger=migrator t=2024-01-30T23:12:39.502050216Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.312874ms grafana | logger=migrator t=2024-01-30T23:12:39.506466638Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" grafana | logger=migrator t=2024-01-30T23:12:39.508000127Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.533399ms grafana | logger=migrator t=2024-01-30T23:12:39.51163658Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" grafana | logger=migrator t=2024-01-30T23:12:39.512369368Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=733.068µs 
grafana | logger=migrator t=2024-01-30T23:12:39.517790097Z level=info msg="Executing migration" id="Add column with_credentials" grafana | logger=migrator t=2024-01-30T23:12:39.520109746Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.32404ms grafana | logger=migrator t=2024-01-30T23:12:39.525253797Z level=info msg="Executing migration" id="Add secure json data column" grafana | logger=migrator t=2024-01-30T23:12:39.527469424Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.214907ms grafana | logger=migrator t=2024-01-30T23:12:39.530272406Z level=info msg="Executing migration" id="Update data_source table charset" grafana | logger=migrator t=2024-01-30T23:12:39.530305426Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=33.73µs grafana | logger=migrator t=2024-01-30T23:12:39.53632335Z level=info msg="Executing migration" id="Update initial version to 1" grafana | logger=migrator t=2024-01-30T23:12:39.536837133Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=513.713µs grafana | logger=migrator t=2024-01-30T23:12:39.539942372Z level=info msg="Executing migration" id="Add read_only data column" grafana | logger=migrator t=2024-01-30T23:12:39.543610455Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.666973ms grafana | logger=migrator t=2024-01-30T23:12:39.548469059Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" grafana | logger=migrator t=2024-01-30T23:12:39.548642214Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=167.754µs grafana | logger=migrator t=2024-01-30T23:12:39.551486366Z level=info msg="Executing migration" id="Update json_data with nulls" grafana | logger=migrator t=2024-01-30T23:12:39.55163591Z level=info msg="Migration successfully executed" id="Update json_data 
with nulls" duration=153.504µs grafana | logger=migrator t=2024-01-30T23:12:39.556982926Z level=info msg="Executing migration" id="Add uid column" grafana | logger=migrator t=2024-01-30T23:12:39.560511836Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.52857ms grafana | logger=migrator t=2024-01-30T23:12:39.56340735Z level=info msg="Executing migration" id="Update uid value" grafana | logger=migrator t=2024-01-30T23:12:39.563668798Z level=info msg="Migration successfully executed" id="Update uid value" duration=261.608µs grafana | logger=migrator t=2024-01-30T23:12:39.566952101Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" grafana | logger=migrator t=2024-01-30T23:12:39.567745061Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=792.62µs grafana | logger=migrator t=2024-01-30T23:12:39.573425676Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" grafana | logger=migrator t=2024-01-30T23:12:39.574983406Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.5511ms grafana | logger=migrator t=2024-01-30T23:12:39.579364997Z level=info msg="Executing migration" id="create api_key table" grafana | logger=migrator t=2024-01-30T23:12:39.580419325Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.042178ms grafana | logger=migrator t=2024-01-30T23:12:39.584918649Z level=info msg="Executing migration" id="add index api_key.account_id" grafana | logger=migrator t=2024-01-30T23:12:39.585642967Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=724.198µs grafana | logger=migrator t=2024-01-30T23:12:39.59125263Z level=info msg="Executing migration" id="add index api_key.key" grafana | logger=migrator t=2024-01-30T23:12:39.592650827Z level=info msg="Migration successfully executed" id="add 
index api_key.key" duration=1.397576ms grafana | logger=migrator t=2024-01-30T23:12:39.596167796Z level=info msg="Executing migration" id="add index api_key.account_id_name" grafana | logger=migrator t=2024-01-30T23:12:39.597428479Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.260463ms grafana | logger=migrator t=2024-01-30T23:12:39.605073593Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" grafana | logger=migrator t=2024-01-30T23:12:39.605796282Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=722.639µs grafana | logger=migrator t=2024-01-30T23:12:39.609930177Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" grafana | logger=migrator t=2024-01-30T23:12:39.61121398Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.283403ms policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 
policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > 
upgrade 0370-jpatoscarelationshiptype_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-30T23:12:39.615093429Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" grafana | logger=migrator t=2024-01-30T23:12:39.616183946Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.090398ms grafana | logger=migrator t=2024-01-30T23:12:39.620577289Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" grafana | logger=migrator t=2024-01-30T23:12:39.62887839Z level=info msg="Migration successfully executed" 
id="Rename table api_key to api_key_v1 - v1" duration=8.300562ms grafana | logger=migrator t=2024-01-30T23:12:39.633473386Z level=info msg="Executing migration" id="create api_key table v2" grafana | logger=migrator t=2024-01-30T23:12:39.634113723Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=640.107µs grafana | logger=migrator t=2024-01-30T23:12:39.63751847Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" grafana | logger=migrator t=2024-01-30T23:12:39.638267909Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=749.209µs grafana | logger=migrator t=2024-01-30T23:12:39.641955014Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" grafana | logger=migrator t=2024-01-30T23:12:39.643260436Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.305362ms grafana | logger=migrator t=2024-01-30T23:12:39.647640028Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" grafana | logger=migrator t=2024-01-30T23:12:39.648966662Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.326344ms grafana | logger=migrator t=2024-01-30T23:12:39.652150753Z level=info msg="Executing migration" id="copy api_key v1 to v2" grafana | logger=migrator t=2024-01-30T23:12:39.652761048Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=609.825µs grafana | logger=migrator t=2024-01-30T23:12:39.658677449Z level=info msg="Executing migration" id="Drop old table api_key_v1" grafana | logger=migrator t=2024-01-30T23:12:39.659172882Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=495.223µs grafana | logger=migrator t=2024-01-30T23:12:39.664450587Z level=info msg="Executing migration" id="Update api_key table charset" grafana | logger=migrator 
t=2024-01-30T23:12:39.664475768Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=29.401µs grafana | logger=migrator t=2024-01-30T23:12:39.669221488Z level=info msg="Executing migration" id="Add expires to api_key table" grafana | logger=migrator t=2024-01-30T23:12:39.673101597Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.879919ms grafana | logger=migrator t=2024-01-30T23:12:39.676564985Z level=info msg="Executing migration" id="Add service account foreign key" grafana | logger=migrator t=2024-01-30T23:12:39.678935156Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.369661ms grafana | logger=migrator t=2024-01-30T23:12:39.683965264Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" grafana | logger=migrator t=2024-01-30T23:12:39.684278592Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=314.058µs grafana | logger=migrator t=2024-01-30T23:12:39.687810392Z level=info msg="Executing migration" id="Add last_used_at to api_key table" grafana | logger=migrator t=2024-01-30T23:12:39.692290885Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.479703ms grafana | logger=migrator t=2024-01-30T23:12:39.695219221Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" grafana | logger=migrator t=2024-01-30T23:12:39.696933074Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.713763ms grafana | logger=migrator t=2024-01-30T23:12:39.701673235Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" grafana | logger=migrator t=2024-01-30T23:12:39.702779173Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.100418ms grafana | logger=migrator 
t=2024-01-30T23:12:39.705891882Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" grafana | logger=migrator t=2024-01-30T23:12:39.706765025Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=881.083µs grafana | logger=migrator t=2024-01-30T23:12:39.710570192Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" grafana | logger=migrator t=2024-01-30T23:12:39.711434243Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=852.531µs grafana | logger=migrator t=2024-01-30T23:12:39.715270102Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" grafana | logger=migrator t=2024-01-30T23:12:39.716040641Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=770.089µs grafana | logger=migrator t=2024-01-30T23:12:39.719897199Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" grafana | logger=migrator t=2024-01-30T23:12:39.720658649Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=761.28µs grafana | logger=migrator t=2024-01-30T23:12:39.726406296Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" grafana | logger=migrator t=2024-01-30T23:12:39.72776753Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.352684ms grafana | logger=migrator t=2024-01-30T23:12:39.737335074Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" grafana | logger=migrator t=2024-01-30T23:12:39.737491448Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=152.044µs grafana | logger=migrator t=2024-01-30T23:12:39.741174702Z level=info msg="Executing migration" 
id="Update dashboard_snapshot table charset" grafana | logger=migrator t=2024-01-30T23:12:39.741215593Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=43.261µs grafana | logger=migrator t=2024-01-30T23:12:39.746037596Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" grafana | logger=migrator t=2024-01-30T23:12:39.748798387Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.760781ms grafana | logger=migrator t=2024-01-30T23:12:39.752989713Z level=info msg="Executing migration" id="Add encrypted dashboard json column" grafana | logger=migrator t=2024-01-30T23:12:39.755687422Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.697408ms grafana | logger=migrator t=2024-01-30T23:12:39.76033283Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2024-01-30T23:12:39.760396262Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=64.162µs grafana | logger=migrator t=2024-01-30T23:12:39.76265093Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2024-01-30T23:12:39.763700176Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.049186ms policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql policy-db-migrator | 
-------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName 
VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0470-pdp.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0480-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-30T23:12:39.767124223Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" grafana | logger=migrator t=2024-01-30T23:12:39.767976755Z level=info msg="Migration successfully executed" id="create index 
UQE_quota_org_id_user_id_target - v1" duration=852.172µs grafana | logger=migrator t=2024-01-30T23:12:39.773428413Z level=info msg="Executing migration" id="Update quota table charset" grafana | logger=migrator t=2024-01-30T23:12:39.773460824Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=32.341µs grafana | logger=migrator t=2024-01-30T23:12:39.778647417Z level=info msg="Executing migration" id="create plugin_setting table" grafana | logger=migrator t=2024-01-30T23:12:39.779382386Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=733.919µs grafana | logger=migrator t=2024-01-30T23:12:39.784010223Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" grafana | logger=migrator t=2024-01-30T23:12:39.785524593Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.513769ms grafana | logger=migrator t=2024-01-30T23:12:39.791380362Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" grafana | logger=migrator t=2024-01-30T23:12:39.796592204Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=5.203952ms grafana | logger=migrator t=2024-01-30T23:12:39.803836449Z level=info msg="Executing migration" id="Update plugin_setting table charset" grafana | logger=migrator t=2024-01-30T23:12:39.80388401Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=48.821µs grafana | logger=migrator t=2024-01-30T23:12:39.808320813Z level=info msg="Executing migration" id="create session table" grafana | logger=migrator t=2024-01-30T23:12:39.809660167Z level=info msg="Migration successfully executed" id="create session table" duration=1.338594ms grafana | logger=migrator t=2024-01-30T23:12:39.815312161Z level=info msg="Executing migration" id="Drop old table 
playlist table"
grafana | logger=migrator t=2024-01-30T23:12:39.815395453Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=83.822µs
grafana | logger=migrator t=2024-01-30T23:12:39.819019696Z level=info msg="Executing migration" id="Drop old table playlist_item table"
grafana | logger=migrator t=2024-01-30T23:12:39.819101828Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=79.352µs
grafana | logger=migrator t=2024-01-30T23:12:39.824035364Z level=info msg="Executing migration" id="create playlist table v2"
grafana | logger=migrator t=2024-01-30T23:12:39.82465096Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=615.506µs
grafana | logger=migrator t=2024-01-30T23:12:39.82860746Z level=info msg="Executing migration" id="create playlist item table v2"
grafana | logger=migrator t=2024-01-30T23:12:39.82975566Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.14801ms
grafana | logger=migrator t=2024-01-30T23:12:39.835755763Z level=info msg="Executing migration" id="Update playlist table charset"
grafana | logger=migrator t=2024-01-30T23:12:39.835803674Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=49.541µs
grafana | logger=migrator t=2024-01-30T23:12:39.840348049Z level=info msg="Executing migration" id="Update playlist_item table charset"
grafana | logger=migrator t=2024-01-30T23:12:39.84038529Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=38.801µs
grafana | logger=migrator t=2024-01-30T23:12:39.844302381Z level=info msg="Executing migration" id="Add playlist column created_at"
grafana | logger=migrator t=2024-01-30T23:12:39.850196921Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.893841ms
grafana | logger=migrator t=2024-01-30T23:12:39.854763197Z level=info msg="Executing migration" id="Add playlist column updated_at"
grafana | logger=migrator t=2024-01-30T23:12:39.857848796Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.08005ms
grafana | logger=migrator t=2024-01-30T23:12:39.861861688Z level=info msg="Executing migration" id="drop preferences table v2"
grafana | logger=migrator t=2024-01-30T23:12:39.86195712Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=96.072µs
grafana | logger=migrator t=2024-01-30T23:12:39.867542733Z level=info msg="Executing migration" id="drop preferences table v3"
grafana | logger=migrator t=2024-01-30T23:12:39.867703507Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=79.172µs
grafana | logger=migrator t=2024-01-30T23:12:39.872239873Z level=info msg="Executing migration" id="create preferences table v3"
grafana | logger=migrator t=2024-01-30T23:12:39.873024192Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=783.779µs
grafana | logger=migrator t=2024-01-30T23:12:39.878116822Z level=info msg="Executing migration" id="Update preferences table charset"
grafana | logger=migrator t=2024-01-30T23:12:39.878142543Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=28.101µs
grafana | logger=migrator t=2024-01-30T23:12:39.882639037Z level=info msg="Executing migration" id="Add column team_id in preferences"
grafana | logger=migrator t=2024-01-30T23:12:39.886195048Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.558011ms
grafana | logger=migrator t=2024-01-30T23:12:39.892347425Z level=info msg="Executing migration" id="Update team_id column values in preferences"
grafana | logger=migrator t=2024-01-30T23:12:39.892497689Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=150.914µs
grafana | logger=migrator t=2024-01-30T23:12:39.895247669Z level=info msg="Executing migration" id="Add column week_start in preferences"
grafana | logger=migrator t=2024-01-30T23:12:39.89841359Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.16576ms
grafana | logger=migrator t=2024-01-30T23:12:39.902906954Z level=info msg="Executing migration" id="Add column preferences.json_data"
grafana | logger=migrator t=2024-01-30T23:12:39.906356212Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.448928ms
grafana | logger=migrator t=2024-01-30T23:12:39.910501707Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
grafana | logger=migrator t=2024-01-30T23:12:39.91062676Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=125.943µs
grafana | logger=migrator t=2024-01-30T23:12:39.917129087Z level=info msg="Executing migration" id="Add preferences index org_id"
grafana | logger=migrator t=2024-01-30T23:12:39.91806675Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=936.803µs
grafana | logger=migrator t=2024-01-30T23:12:39.921858196Z level=info msg="Executing migration" id="Add preferences index user_id"
grafana | logger=migrator t=2024-01-30T23:12:39.922712509Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=854.253µs
grafana | logger=migrator t=2024-01-30T23:12:39.926139156Z level=info msg="Executing migration" id="create alert table v1"
kafka | [2024-01-30 23:13:15,819] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
kafka | [2024-01-30 23:13:15,819] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
kafka | [2024-01-30 23:13:15,819] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
kafka | [2024-01-30 23:13:15,819] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
kafka | [2024-01-30 23:13:15,820] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
kafka | [2024-01-30 23:13:15,821] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
kafka | [2024-01-30 23:13:15,859] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,870] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,872] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,873] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,875] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:15,888] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,889] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,889] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,889] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,889] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:15,896] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,897] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,897] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,897] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,897] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:15,904] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,904] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,904] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,905] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,905] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:15,911] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,912] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,912] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,912] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,912] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-30T23:12:39.92750384Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.364734ms
grafana | logger=migrator t=2024-01-30T23:12:39.931178145Z level=info msg="Executing migration" id="add index alert org_id & id "
grafana | logger=migrator t=2024-01-30T23:12:39.932477267Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.298452ms
grafana | logger=migrator t=2024-01-30T23:12:39.936798348Z level=info msg="Executing migration" id="add index alert state"
grafana | logger=migrator t=2024-01-30T23:12:39.937815934Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.005745ms
grafana | logger=migrator t=2024-01-30T23:12:40.007131797Z level=info msg="Executing migration" id="add index alert dashboard_id"
grafana | logger=migrator t=2024-01-30T23:12:40.008389571Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.257874ms
grafana | logger=migrator t=2024-01-30T23:12:40.013746051Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
grafana | logger=migrator t=2024-01-30T23:12:40.014851893Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.104881ms
grafana | logger=migrator t=2024-01-30T23:12:40.018645388Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
grafana | logger=migrator t=2024-01-30T23:12:40.020046606Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.404009ms
grafana | logger=migrator t=2024-01-30T23:12:40.026941798Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
grafana | logger=migrator t=2024-01-30T23:12:40.02774131Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=799.532µs
grafana | logger=migrator t=2024-01-30T23:12:40.036045431Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
grafana | logger=migrator t=2024-01-30T23:12:40.05042871Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=14.38128ms
grafana | logger=migrator t=2024-01-30T23:12:40.057217309Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
grafana | logger=migrator t=2024-01-30T23:12:40.058018051Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=799.902µs
grafana | logger=migrator t=2024-01-30T23:12:40.062565618Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
grafana | logger=migrator t=2024-01-30T23:12:40.063202775Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=636.947µs
grafana | logger=migrator t=2024-01-30T23:12:40.066864637Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
grafana | logger=migrator t=2024-01-30T23:12:40.067051952Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=187.455µs
grafana | logger=migrator t=2024-01-30T23:12:40.072827892Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
grafana | logger=migrator t=2024-01-30T23:12:40.073185642Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=357.93µs
grafana | logger=migrator t=2024-01-30T23:12:40.078287664Z level=info msg="Executing migration" id="create alert_notification table v1"
grafana | logger=migrator t=2024-01-30T23:12:40.078823298Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=533.754µs
grafana | logger=migrator t=2024-01-30T23:12:40.083693514Z level=info msg="Executing migration" id="Add column is_default"
grafana | logger=migrator t=2024-01-30T23:12:40.088837847Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.143963ms
grafana | logger=migrator t=2024-01-30T23:12:40.093559858Z level=info msg="Executing migration" id="Add column frequency"
grafana | logger=migrator t=2024-01-30T23:12:40.096926112Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.372434ms
grafana | logger=migrator t=2024-01-30T23:12:40.101721645Z level=info msg="Executing migration" id="Add column send_reminder"
grafana | logger=migrator t=2024-01-30T23:12:40.107334021Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=5.611656ms
grafana | logger=migrator t=2024-01-30T23:12:40.11128652Z level=info msg="Executing migration" id="Add column disable_resolve_message"
grafana | logger=migrator t=2024-01-30T23:12:40.115641432Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.354762ms
grafana | logger=migrator t=2024-01-30T23:12:40.120526947Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
grafana | logger=migrator t=2024-01-30T23:12:40.121416682Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=889.545µs
grafana | logger=migrator t=2024-01-30T23:12:40.126006029Z level=info msg="Executing migration" id="Update alert table charset"
grafana | logger=migrator t=2024-01-30T23:12:40.12603176Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=26.581µs
grafana | logger=migrator t=2024-01-30T23:12:40.130782302Z level=info msg="Executing migration" id="Update alert_notification table charset"
grafana | logger=migrator t=2024-01-30T23:12:40.130829823Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=49.081µs
grafana | logger=migrator t=2024-01-30T23:12:40.134775863Z level=info msg="Executing migration" id="create notification_journal table v1"
grafana | logger=migrator t=2024-01-30T23:12:40.135819052Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.04331ms
grafana | logger=migrator t=2024-01-30T23:12:40.140702958Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
grafana | logger=migrator t=2024-01-30T23:12:40.143833885Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=3.117026ms
grafana | logger=migrator t=2024-01-30T23:12:40.149310087Z level=info msg="Executing migration" id="drop alert_notification_journal"
grafana | logger=migrator t=2024-01-30T23:12:40.15013106Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=763.141µs
grafana | logger=migrator t=2024-01-30T23:12:40.152976819Z level=info msg="Executing migration" id="create alert_notification_state table v1"
grafana | logger=migrator t=2024-01-30T23:12:40.153673878Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=697µs
grafana | logger=migrator t=2024-01-30T23:12:40.159321314Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
grafana | logger=migrator t=2024-01-30T23:12:40.161031942Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.708808ms
grafana | logger=migrator t=2024-01-30T23:12:40.16597411Z level=info msg="Executing migration" id="Add for to alert table"
grafana | logger=migrator t=2024-01-30T23:12:40.170497305Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.526955ms
grafana | logger=migrator t=2024-01-30T23:12:40.174577259Z level=info msg="Executing migration" id="Add column uid in alert_notification"
grafana | logger=migrator t=2024-01-30T23:12:40.178180469Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.606929ms
grafana | logger=migrator t=2024-01-30T23:12:40.183898587Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
grafana | logger=migrator t=2024-01-30T23:12:40.184104153Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=205.846µs
grafana | logger=migrator t=2024-01-30T23:12:40.189289697Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
grafana | logger=migrator t=2024-01-30T23:12:40.190435559Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.145611ms
grafana | logger=migrator t=2024-01-30T23:12:40.195219902Z level=info msg="Executing migration" id="Remove unique index org_id_name"
grafana | logger=migrator t=2024-01-30T23:12:40.196363543Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.143441ms
grafana | logger=migrator t=2024-01-30T23:12:40.201849375Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
grafana | logger=migrator t=2024-01-30T23:12:40.205446476Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.596601ms
grafana | logger=migrator t=2024-01-30T23:12:40.209454787Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
grafana | logger=migrator t=2024-01-30T23:12:40.209520659Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=66.742µs
grafana | logger=migrator t=2024-01-30T23:12:40.21349562Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
grafana | logger=migrator t=2024-01-30T23:12:40.214325903Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=830.033µs
grafana | logger=migrator t=2024-01-30T23:12:40.217168441Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
grafana | logger=migrator t=2024-01-30T23:12:40.218696003Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.528282ms
grafana | logger=migrator t=2024-01-30T23:12:40.223279651Z level=info msg="Executing migration" id="Drop old annotation table v4"
grafana | logger=migrator t=2024-01-30T23:12:40.223409105Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=130.044µs
grafana | logger=migrator t=2024-01-30T23:12:40.228130556Z level=info msg="Executing migration" id="create annotation table v5"
grafana | logger=migrator t=2024-01-30T23:12:40.229120583Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=990.027µs
grafana | logger=migrator t=2024-01-30T23:12:40.233574137Z level=info msg="Executing migration" id="add index annotation 0 v3"
grafana | logger=migrator t=2024-01-30T23:12:40.234853172Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.278975ms
grafana | logger=migrator t=2024-01-30T23:12:40.240097438Z level=info msg="Executing migration" id="add index annotation 1 v3"
grafana | logger=migrator t=2024-01-30T23:12:40.240971161Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=873.384µs
grafana | logger=migrator t=2024-01-30T23:12:40.24413013Z level=info msg="Executing migration" id="add index annotation 2 v3"
grafana | logger=migrator t=2024-01-30T23:12:40.24594541Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.81495ms
grafana | logger=migrator t=2024-01-30T23:12:40.249994644Z level=info msg="Executing migration" id="add index annotation 3 v3"
grafana | logger=migrator t=2024-01-30T23:12:40.25096136Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=961.017µs
grafana | logger=migrator t=2024-01-30T23:12:40.25707396Z level=info msg="Executing migration" id="add index annotation 4 v3"
grafana | logger=migrator t=2024-01-30T23:12:40.25887454Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.80058ms
grafana | logger=migrator t=2024-01-30T23:12:40.262738517Z level=info msg="Executing migration" id="Update annotation table charset"
grafana | logger=migrator t=2024-01-30T23:12:40.262812119Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=74.272µs
grafana | logger=migrator t=2024-01-30T23:12:40.26823415Z level=info msg="Executing migration" id="Add column region_id to annotation table"
grafana | logger=migrator t=2024-01-30T23:12:40.272248492Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.014341ms
grafana | logger=migrator t=2024-01-30T23:12:40.276699576Z level=info msg="Executing migration" id="Drop category_id index"
grafana | logger=migrator t=2024-01-30T23:12:40.277473197Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=773.291µs
grafana | logger=migrator t=2024-01-30T23:12:40.281011696Z level=info msg="Executing migration" id="Add column tags to annotation table"
grafana | logger=migrator t=2024-01-30T23:12:40.286675213Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=5.662857ms
grafana | logger=migrator t=2024-01-30T23:12:40.292406143Z level=info msg="Executing migration" id="Create annotation_tag table v2"
grafana | logger=migrator t=2024-01-30T23:12:40.292929947Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=517.274µs
grafana | logger=migrator t=2024-01-30T23:12:40.297238867Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
grafana | logger=migrator t=2024-01-30T23:12:40.297973788Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=737.151µs
grafana | logger=migrator t=2024-01-30T23:12:40.301510866Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
grafana | logger=migrator t=2024-01-30T23:12:40.302546584Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.035558ms
grafana | logger=migrator t=2024-01-30T23:12:40.307346068Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
grafana | logger=migrator t=2024-01-30T23:12:40.322333485Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.986987ms
grafana | logger=migrator t=2024-01-30T23:12:40.324991349Z level=info msg="Executing migration" id="Create annotation_tag table v3"
grafana | logger=migrator t=2024-01-30T23:12:40.325696739Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=705.51µs
grafana | logger=migrator t=2024-01-30T23:12:40.332582461Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
grafana | logger=migrator t=2024-01-30T23:12:40.334659919Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=2.077058ms
grafana | logger=migrator t=2024-01-30T23:12:40.341340544Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
grafana | logger=migrator t=2024-01-30T23:12:40.341608852Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=263.927µs
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | session.timeout.ms = 45000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
policy-pap | ssl.truststore.type = JKS
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-pap |
policy-pap | [2024-01-30T23:13:15.027+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-pap | [2024-01-30T23:13:15.027+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-pap | [2024-01-30T23:13:15.027+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706656395027
policy-pap | [2024-01-30T23:13:15.028+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2024-01-30T23:13:15.028+00:00|INFO|ServiceManager|main] Policy PAP starting topics
policy-pap | [2024-01-30T23:13:15.028+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=798059c5-2a41-4d37-9e93-8ee87cf07c75, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2024-01-30T23:13:15.028+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=af90a869-32d4-41c0-900c-5574709c07e7, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
policy-pap | [2024-01-30T23:13:15.028+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=df2d82e9-4531-4d79-9121-98638ecd8158, alive=false, publisher=null]]: starting
policy-pap | [2024-01-30T23:13:15.042+00:00|INFO|ProducerConfig|main] ProducerConfig values:
policy-pap | acks = -1
policy-pap | auto.include.jmx.reporter = true
policy-pap | batch.size = 16384
policy-pap | bootstrap.servers = [kafka:9092]
policy-pap | buffer.memory = 33554432
policy-pap | client.dns.lookup = use_all_dns_ips
policy-pap | client.id = producer-1
policy-pap | compression.type = none
policy-pap | connections.max.idle.ms = 540000
policy-pap | delivery.timeout.ms = 120000
policy-pap | enable.idempotence = true
policy-pap | interceptor.classes = []
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | linger.ms = 0
policy-pap | max.block.ms = 60000
policy-pap | max.in.flight.requests.per.connection = 5
policy-pap | max.request.size = 1048576
policy-pap | metadata.max.age.ms = 300000
policy-pap | metadata.max.idle.ms = 300000
policy-pap | metric.reporters = []
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partitioner.adaptive.partitioning.enable = true
policy-pap | partitioner.availability.timeout.ms = 0
policy-pap | partitioner.class = null
policy-pap | partitioner.ignore.keys = false
policy-pap | receive.buffer.bytes = 32768
policy-pap | reconnect.backoff.max.ms = 1000
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
policy-pap | retries = 2147483647
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
simulator | overriding logback.xml
simulator | 2024-01-30 23:12:42,595 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
simulator | 2024-01-30 23:12:42,709 INFO org.onap.policy.models.simulators starting
simulator | 2024-01-30 23:12:42,709 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
simulator | 2024-01-30 23:12:42,943 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
simulator | 2024-01-30 23:12:42,944 INFO org.onap.policy.models.simulators starting A&AI simulator
simulator | 2024-01-30 23:12:43,081 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-01-30 23:12:43,092 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-01-30 23:12:43,094 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI
simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-30 23:12:43,098 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 simulator | 2024-01-30 23:12:43,154 INFO Session workerName=node0 simulator | 2024-01-30 23:12:43,762 INFO Using GSON for REST calls simulator | 2024-01-30 23:12:43,837 INFO Started o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE} simulator | 2024-01-30 23:12:43,845 INFO Started A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2024-01-30 23:12:43,852 INFO Started Server@16746061{STARTING}[11.0.18,sto=0] @1756ms simulator | 2024-01-30 23:12:43,852 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4242 ms. 
simulator | 2024-01-30 23:12:43,860 INFO org.onap.policy.models.simulators starting SDNC simulator
simulator | 2024-01-30 23:12:43,865 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
simulator | 2024-01-30 23:12:43,865 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-01-30 23:12:43,871 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
simulator | 2024-01-30 23:12:43,872 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0
simulator | 2024-01-30 23:12:43,885 INFO Session workerName=node0
simulator | 2024-01-30 23:12:43,942 INFO Using GSON for REST calls
simulator | 2024-01-30 23:12:43,955 INFO Started o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}
simulator | 2024-01-30 23:12:43,956 INFO Started SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
simulator | 2024-01-30 23:12:43,956 INFO Started Server@75459c75{STARTING}[11.0.18,sto=0] @1860ms
kafka | [2024-01-30 23:13:15,920] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,920] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,920] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,920] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,920] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:15,927] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,927] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,927] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,927] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,928] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:15,937] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,937] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,938] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,938] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,938] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:15,946] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,947] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,947] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,947] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,947] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:15,953] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,954] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,954] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,954] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,954] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:15,959] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,960] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,960] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,960] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,960] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:15,966] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:15,966] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:15,967] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,967] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:15,967] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:15,974] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	transaction.timeout.ms = 60000
policy-pap | 	transactional.id = null
policy-pap | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 
policy-pap | [2024-01-30T23:13:15.053+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
policy-pap | [2024-01-30T23:13:15.066+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-pap | [2024-01-30T23:13:15.066+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-pap | [2024-01-30T23:13:15.066+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706656395066 policy-pap | [2024-01-30T23:13:15.066+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=df2d82e9-4531-4d79-9121-98638ecd8158, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created policy-pap | [2024-01-30T23:13:15.066+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=df96c0b5-5500-4331-8719-7f89f9b72da4, alive=false, publisher=null]]: starting policy-pap | [2024-01-30T23:13:15.067+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-pap | acks = -1 policy-pap | auto.include.jmx.reporter = true policy-pap | batch.size = 16384 policy-pap | bootstrap.servers = [kafka:9092] policy-pap | buffer.memory = 33554432 policy-pap | client.dns.lookup = use_all_dns_ips policy-pap | client.id = producer-2 policy-pap | compression.type = none policy-pap | connections.max.idle.ms = 540000 policy-pap | delivery.timeout.ms = 120000 policy-pap | enable.idempotence = true policy-pap | interceptor.classes = [] policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-pap | linger.ms = 0 policy-pap | max.block.ms = 60000 policy-pap | max.in.flight.requests.per.connection = 5 policy-pap | max.request.size = 1048576 policy-pap | metadata.max.age.ms = 300000 policy-pap | metadata.max.idle.ms = 300000 policy-pap | metric.reporters = [] policy-pap | metrics.num.samples = 2 policy-pap | metrics.recording.level = INFO policy-pap | metrics.sample.window.ms = 30000 kafka | [2024-01-30 23:13:15,974] INFO Created log for partition __consumer_offsets-39 in 
/var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:15,975] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:15,975] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:15,975] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) kafka | [2024-01-30 23:13:15,980] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:15,981] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:15,981] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:15,981] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:15,981] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:15,987] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:15,988] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:15,988] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:15,988] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:15,988] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:15,999] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,000] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,000] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,000] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,000] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,008] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,008] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,009] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,009] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,009] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,014] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,014] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,014] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,015] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,015] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,020] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,021] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,021] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,021] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,021] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,027] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0500-pdpsubgroup.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 
policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT 
EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0570-toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0580-toscadatatypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 
0600-toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0610-toscanodetemplates.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0630-toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion 
VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator | 
policy-db-migrator | 
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-db-migrator | --------------
prometheus | ts=2024-01-30T23:12:37.999Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d
prometheus | ts=2024-01-30T23:12:37.999Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)"
prometheus | ts=2024-01-30T23:12:37.999Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)"
prometheus | ts=2024-01-30T23:12:37.999Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
prometheus | ts=2024-01-30T23:12:37.999Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)"
prometheus | ts=2024-01-30T23:12:37.999Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)"
prometheus | ts=2024-01-30T23:12:38.003Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus | ts=2024-01-30T23:12:38.004Z caller=main.go:1039 level=info msg="Starting TSDB ..."
prometheus | ts=2024-01-30T23:12:38.008Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090
prometheus | ts=2024-01-30T23:12:38.008Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
prometheus | ts=2024-01-30T23:12:38.013Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
prometheus | ts=2024-01-30T23:12:38.013Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.13µs
prometheus | ts=2024-01-30T23:12:38.013Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while"
prometheus | ts=2024-01-30T23:12:38.014Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
prometheus | ts=2024-01-30T23:12:38.014Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=165.414µs wal_replay_duration=990.419µs wbl_replay_duration=320ns total_replay_duration=1.202894ms
prometheus | ts=2024-01-30T23:12:38.018Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC
prometheus | ts=2024-01-30T23:12:38.018Z caller=main.go:1063 level=info msg="TSDB started"
prometheus | ts=2024-01-30T23:12:38.019Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus | ts=2024-01-30T23:12:38.020Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.745828ms db_storage=1.1µs remote_storage=1.56µs web_handler=520ns query_engine=1.23µs scrape=635.077µs scrape_sd=122.604µs notify=42.311µs notify_sd=18.6µs rules=1.52µs tracing=5.01µs
prometheus | ts=2024-01-30T23:12:38.020Z caller=main.go:1024 level=info msg="Server is ready to receive web requests."
prometheus | ts=2024-01-30T23:12:38.021Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..."
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-db-migrator | -------------- kafka | [2024-01-30 23:13:16,027] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,027] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,028] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,028] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,034] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,034] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,034] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,034] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,035] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,044] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,044] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,044] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,044] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,045] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,051] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,052] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,052] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,052] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,053] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,059] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,060] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,060] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,060] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,060] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,065] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,066] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,066] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,066] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,067] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,074] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,075] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,075] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,075] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,075] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger)
kafka | [2024-01-30 23:13:16,084] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | 	partitioner.adaptive.partitioning.enable = true
policy-pap | 	partitioner.availability.timeout.ms = 0
policy-pap | 	partitioner.class = null
policy-pap | 	partitioner.ignore.keys = false
policy-pap | 	receive.buffer.bytes = 32768
policy-pap | 	reconnect.backoff.max.ms = 1000
policy-pap | 	reconnect.backoff.ms = 50
policy-pap | 	request.timeout.ms = 30000
policy-pap | 	retries = 2147483647
policy-pap | 	retry.backoff.ms = 100
policy-pap | 	sasl.client.callback.handler.class = null
policy-pap | 	sasl.jaas.config = null
policy-pap | 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-pap | 	sasl.kerberos.min.time.before.relogin = 60000
policy-pap | 	sasl.kerberos.service.name = null
policy-pap | 	sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | 	sasl.kerberos.ticket.renew.window.factor = 0.8
policy-pap | 	sasl.login.callback.handler.class = null
policy-pap | 	sasl.login.class = null
policy-pap | 	sasl.login.connect.timeout.ms = null
policy-pap | 	sasl.login.read.timeout.ms = null
policy-pap | 	sasl.login.refresh.buffer.seconds = 300
policy-pap | 	sasl.login.refresh.min.period.seconds = 60
policy-pap | 	sasl.login.refresh.window.factor = 0.8
policy-pap | 	sasl.login.refresh.window.jitter = 0.05
policy-pap | 	sasl.login.retry.backoff.max.ms = 10000
policy-pap | 	sasl.login.retry.backoff.ms = 100
policy-pap | 	sasl.mechanism = GSSAPI
policy-pap | 	sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | 	sasl.oauthbearer.expected.audience = null
policy-pap | 	sasl.oauthbearer.expected.issuer = null
policy-pap | 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-pap | 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | 	sasl.oauthbearer.jwks.endpoint.url = null
policy-pap | 	sasl.oauthbearer.scope.claim.name = scope
policy-pap | 	sasl.oauthbearer.sub.claim.name = sub
policy-pap | 	sasl.oauthbearer.token.endpoint.url = null
policy-pap | 	security.protocol = PLAINTEXT
policy-pap | 	security.providers = null
policy-pap | 	send.buffer.bytes = 131072
policy-pap | 	socket.connection.setup.timeout.max.ms = 30000
policy-pap | 	socket.connection.setup.timeout.ms = 10000
policy-pap | 	ssl.cipher.suites = null
policy-pap | 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-pap | 	ssl.endpoint.identification.algorithm = https
policy-pap | 	ssl.engine.factory.class = null
policy-pap | 	ssl.key.password = null
policy-pap | 	ssl.keymanager.algorithm = SunX509
policy-pap | 	ssl.keystore.certificate.chain = null
policy-pap | 	ssl.keystore.key = null
policy-pap | 	ssl.keystore.location = null
policy-pap | 	ssl.keystore.password = null
policy-pap | 	ssl.keystore.type = JKS
policy-pap | 	ssl.protocol = TLSv1.3
policy-pap | 	ssl.provider = null
policy-pap | 	ssl.secure.random.implementation = null
policy-pap | 	ssl.trustmanager.algorithm = PKIX
policy-pap | 	ssl.truststore.certificates = null
policy-pap | 	ssl.truststore.location = null
policy-pap | 	ssl.truststore.password = null
policy-pap | 	ssl.truststore.type = JKS
policy-pap | 	transaction.timeout.ms = 60000
policy-pap | 	transactional.id = null
policy-pap | 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
policy-pap | 
policy-pap | [2024-01-30T23:13:15.067+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
policy-pap | [2024-01-30T23:13:15.070+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-pap | [2024-01-30T23:13:15.070+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a grafana | logger=migrator t=2024-01-30T23:12:40.344652066Z level=info msg="Executing migration" id="drop table annotation_tag_v2" grafana | logger=migrator t=2024-01-30T23:12:40.34512875Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=476.434µs grafana | logger=migrator t=2024-01-30T23:12:40.348683129Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" grafana | logger=migrator t=2024-01-30T23:12:40.349156402Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=464.403µs grafana | logger=migrator t=2024-01-30T23:12:40.353833023Z level=info msg="Executing migration" id="Add created time to annotation table" grafana | logger=migrator t=2024-01-30T23:12:40.36094322Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=7.109377ms grafana | logger=migrator t=2024-01-30T23:12:40.366482764Z level=info msg="Executing migration" id="Add updated time to annotation table" grafana | logger=migrator t=2024-01-30T23:12:40.371924906Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=5.441982ms grafana | logger=migrator t=2024-01-30T23:12:40.375568697Z level=info msg="Executing migration" id="Add index for created in annotation table" grafana | logger=migrator t=2024-01-30T23:12:40.376428092Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=859.095µs grafana | logger=migrator t=2024-01-30T23:12:40.37996817Z level=info msg="Executing migration" id="Add index for updated in annotation table" grafana | logger=migrator t=2024-01-30T23:12:40.380787492Z level=info msg="Migration successfully executed" id="Add 
index for updated in annotation table" duration=818.952µs grafana | logger=migrator t=2024-01-30T23:12:40.386270805Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" grafana | logger=migrator t=2024-01-30T23:12:40.386784979Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=514.534µs grafana | logger=migrator t=2024-01-30T23:12:40.389917177Z level=info msg="Executing migration" id="Add epoch_end column" grafana | logger=migrator t=2024-01-30T23:12:40.394778582Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.861575ms grafana | logger=migrator t=2024-01-30T23:12:40.399461472Z level=info msg="Executing migration" id="Add index for epoch_end" grafana | logger=migrator t=2024-01-30T23:12:40.400383838Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=921.876µs grafana | logger=migrator t=2024-01-30T23:12:40.406010395Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" grafana | logger=migrator t=2024-01-30T23:12:40.40619479Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=184.785µs grafana | logger=migrator t=2024-01-30T23:12:40.410334765Z level=info msg="Executing migration" id="Move region to single row" grafana | logger=migrator t=2024-01-30T23:12:40.410934732Z level=info msg="Migration successfully executed" id="Move region to single row" duration=599.787µs grafana | logger=migrator t=2024-01-30T23:12:40.415560431Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" grafana | logger=migrator t=2024-01-30T23:12:40.41770206Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=2.14512ms grafana | logger=migrator t=2024-01-30T23:12:40.426904766Z level=info msg="Executing migration" id="Remove index 
org_id_dashboard_id_panel_id_epoch from annotation table" grafana | logger=migrator t=2024-01-30T23:12:40.427721719Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=816.613µs grafana | logger=migrator t=2024-01-30T23:12:40.433234642Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-01-30T23:12:40.43532221Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=2.094988ms grafana | logger=migrator t=2024-01-30T23:12:40.439380483Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" grafana | logger=migrator t=2024-01-30T23:12:40.440763582Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.382379ms grafana | logger=migrator t=2024-01-30T23:12:40.445299967Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" grafana | logger=migrator t=2024-01-30T23:12:40.446625375Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.325678ms grafana | logger=migrator t=2024-01-30T23:12:40.449390892Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" grafana | logger=migrator t=2024-01-30T23:12:40.450802071Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.410449ms grafana | logger=migrator t=2024-01-30T23:12:40.453522806Z level=info msg="Executing migration" id="Increase tags column to length 4096" grafana | logger=migrator t=2024-01-30T23:12:40.453631319Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=97.623µs grafana | logger=migrator 
t=2024-01-30T23:12:40.458744872Z level=info msg="Executing migration" id="create test_data table" grafana | logger=migrator t=2024-01-30T23:12:40.459637426Z level=info msg="Migration successfully executed" id="create test_data table" duration=892.194µs grafana | logger=migrator t=2024-01-30T23:12:40.462785055Z level=info msg="Executing migration" id="create dashboard_version table v1" grafana | logger=migrator t=2024-01-30T23:12:40.463517355Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=731.94µs grafana | logger=migrator t=2024-01-30T23:12:40.467888736Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" grafana | logger=migrator t=2024-01-30T23:12:40.46872232Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=833.314µs grafana | logger=migrator t=2024-01-30T23:12:40.473565864Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" grafana | logger=migrator t=2024-01-30T23:12:40.47519482Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.627676ms grafana | logger=migrator t=2024-01-30T23:12:40.478368538Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" grafana | logger=migrator t=2024-01-30T23:12:40.478679326Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=310.078µs grafana | logger=migrator t=2024-01-30T23:12:40.48167738Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" grafana | logger=migrator t=2024-01-30T23:12:40.48205104Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=373.55µs grafana | logger=migrator t=2024-01-30T23:12:40.484278243Z level=info msg="Executing 
migration" id="alter dashboard_version.data to mediumtext v1" grafana | logger=migrator t=2024-01-30T23:12:40.484366795Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=91.273µs grafana | logger=migrator t=2024-01-30T23:12:40.48886092Z level=info msg="Executing migration" id="create team table" grafana | logger=migrator t=2024-01-30T23:12:40.489514038Z level=info msg="Migration successfully executed" id="create team table" duration=653.018µs grafana | logger=migrator t=2024-01-30T23:12:40.492347517Z level=info msg="Executing migration" id="add index team.org_id" grafana | logger=migrator t=2024-01-30T23:12:40.493270482Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=922.385µs grafana | logger=migrator t=2024-01-30T23:12:40.495979188Z level=info msg="Executing migration" id="add unique index team_org_id_name" grafana | logger=migrator t=2024-01-30T23:12:40.496827672Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=847.984µs grafana | logger=migrator t=2024-01-30T23:12:40.500696429Z level=info msg="Executing migration" id="Add column uid in team" grafana | logger=migrator t=2024-01-30T23:12:40.505101953Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.405924ms grafana | logger=migrator t=2024-01-30T23:12:40.507856459Z level=info msg="Executing migration" id="Update uid column values in team" grafana | logger=migrator t=2024-01-30T23:12:40.508022664Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=166.735µs grafana | logger=migrator t=2024-01-30T23:12:40.510165593Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" grafana | logger=migrator t=2024-01-30T23:12:40.511331926Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.165663ms policy-pap | 
[2024-01-30T23:13:15.070+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1706656395070
policy-pap | [2024-01-30T23:13:15.070+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=df96c0b5-5500-4331-8719-7f89f9b72da4, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-pap | [2024-01-30T23:13:15.070+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-pap | [2024-01-30T23:13:15.070+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-pap | [2024-01-30T23:13:15.072+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-pap | [2024-01-30T23:13:15.072+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-pap | [2024-01-30T23:13:15.074+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-pap | [2024-01-30T23:13:15.074+00:00|INFO|TimerManager|Thread-9] timer manager update started
policy-pap | [2024-01-30T23:13:15.076+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-pap | [2024-01-30T23:13:15.076+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-pap | [2024-01-30T23:13:15.076+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-pap | [2024-01-30T23:13:15.079+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-pap | [2024-01-30T23:13:15.080+00:00|INFO|ServiceManager|main] Policy PAP started
policy-pap | [2024-01-30T23:13:15.081+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.55 seconds (process running for 11.188)
policy-pap | [2024-01-30T23:13:15.494+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: BqZk-O6TQAORpckjOaIW7A
policy-pap | [2024-01-30T23:13:15.494+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-pap | [2024-01-30T23:13:15.494+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Cluster ID: BqZk-O6TQAORpckjOaIW7A
policy-pap | [2024-01-30T23:13:15.495+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: BqZk-O6TQAORpckjOaIW7A
policy-pap | [2024-01-30T23:13:15.531+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
policy-pap | [2024-01-30T23:13:15.531+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
policy-pap | [2024-01-30T23:13:15.554+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-01-30T23:13:15.554+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: BqZk-O6TQAORpckjOaIW7A
policy-pap | [2024-01-30T23:13:15.599+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | [2024-01-30T23:13:15.675+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | 
[2024-01-30T23:13:15.725+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-01-30T23:13:15.785+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0670-toscapolicies.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 
0690-toscapolicy.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0730-toscaproperty.sql policy-db-migrator | 
-------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | [2024-01-30T23:13:15.830+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-01-30T23:13:15.892+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-01-30T23:13:15.936+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-01-30T23:13:15.996+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-01-30T23:13:16.039+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | 
[2024-01-30T23:13:16.104+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-01-30T23:13:16.145+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-01-30T23:13:16.212+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-01-30T23:13:16.249+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-pap | [2024-01-30T23:13:16.326+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-01-30T23:13:16.331+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-01-30T23:13:16.355+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) policy-pap | [2024-01-30T23:13:16.358+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] (Re-)joining 
group policy-pap | [2024-01-30T23:13:16.359+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-e1d7455a-b29e-4689-b5f7-83ff5acf22d7 policy-pap | [2024-01-30T23:13:16.359+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) policy-pap | [2024-01-30T23:13:16.359+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group policy-pap | [2024-01-30T23:13:16.365+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Request joining group due to: need to re-join with the given member-id: consumer-af90a869-32d4-41c0-900c-5574709c07e7-3-efedbc9f-87e0-46d1-9edc-7b7ed80c5c82 policy-pap | [2024-01-30T23:13:16.365+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) policy-pap | [2024-01-30T23:13:16.366+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] (Re-)joining group policy-pap | [2024-01-30T23:13:19.387+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-e1d7455a-b29e-4689-b5f7-83ff5acf22d7', protocol='range'} policy-pap | [2024-01-30T23:13:19.391+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Successfully joined group with generation Generation{generationId=1, memberId='consumer-af90a869-32d4-41c0-900c-5574709c07e7-3-efedbc9f-87e0-46d1-9edc-7b7ed80c5c82', protocol='range'} policy-pap | [2024-01-30T23:13:19.398+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-e1d7455a-b29e-4689-b5f7-83ff5acf22d7=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-01-30T23:13:19.400+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Finished assignment for group at generation 1: {consumer-af90a869-32d4-41c0-900c-5574709c07e7-3-efedbc9f-87e0-46d1-9edc-7b7ed80c5c82=Assignment(partitions=[policy-pdp-pap-0])} policy-pap | [2024-01-30T23:13:19.434+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-e1d7455a-b29e-4689-b5f7-83ff5acf22d7', protocol='range'} 
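The repeated LEADER_NOT_AVAILABLE warnings above are the Kafka client's normal metadata retry loop while the policy-pdp-pap topic is being auto-created; note the correlation ids advancing 2, 4, 6, ... until the leader appears. A minimal sketch of that poll-until-ready pattern in generic Python (the `fetch_metadata` callback and names are hypothetical, not the actual Kafka client code):

```python
import time


def await_topic_ready(fetch_metadata, topic, timeout_s=30.0, backoff_s=0.1):
    """Poll cluster metadata until the topic reports a leader.

    fetch_metadata(topic) is a hypothetical callback returning a state
    string such as 'LEADER_NOT_AVAILABLE' or 'READY', standing in for a
    metadata request/response round trip.
    """
    deadline = time.monotonic() + timeout_s
    correlation_id = 0
    while time.monotonic() < deadline:
        correlation_id += 2  # the log above shows ids stepping 2, 4, 6, ...
        state = fetch_metadata(topic)
        if state == 'READY':
            return correlation_id  # id of the request that finally succeeded
        # a WARN like LEADER_NOT_AVAILABLE would be logged here, then retry
        time.sleep(backoff_s)
    raise TimeoutError(f"topic {topic!r} never became ready")
```

In the build above the loop resolves on its own once Kafka finishes creating the topic, which is why these WARN lines are expected during CSIT startup rather than a failure.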
policy-pap | [2024-01-30T23:13:19.434+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Successfully synced group in generation Generation{generationId=1, memberId='consumer-af90a869-32d4-41c0-900c-5574709c07e7-3-efedbc9f-87e0-46d1-9edc-7b7ed80c5c82', protocol='range'}
policy-pap | [2024-01-30T23:13:19.435+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2024-01-30T23:13:19.436+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | [2024-01-30T23:13:19.441+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0770-toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0780-toscarequirements.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0820-toscatrigger.sql
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-30T23:12:40.515518931Z level=info msg="Executing migration" id="create team member table"
grafana | logger=migrator t=2024-01-30T23:12:40.516706335Z level=info msg="Migration successfully executed" id="create team member table" duration=1.179574ms
grafana | logger=migrator t=2024-01-30T23:12:40.519778911Z level=info msg="Executing migration" id="add index team_member.org_id"
grafana | logger=migrator t=2024-01-30T23:12:40.520698346Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=918.965µs
grafana | logger=migrator t=2024-01-30T23:12:40.523874634Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
grafana | logger=migrator t=2024-01-30T23:12:40.524918364Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.04785ms
grafana | logger=migrator t=2024-01-30T23:12:40.529087479Z level=info msg="Executing migration" id="add index team_member.team_id"
grafana | logger=migrator t=2024-01-30T23:12:40.530944011Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.859942ms
grafana | logger=migrator t=2024-01-30T23:12:40.534270203Z level=info msg="Executing migration" id="Add column email to team table"
grafana | logger=migrator t=2024-01-30T23:12:40.542131823Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.86068ms
grafana | logger=migrator t=2024-01-30T23:12:40.546240037Z level=info msg="Executing migration" id="Add column external to team_member table"
grafana | logger=migrator t=2024-01-30T23:12:40.550690951Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.450665ms
grafana | logger=migrator t=2024-01-30T23:12:40.553707965Z level=info msg="Executing migration" id="Add column permission to team_member table"
grafana | logger=migrator t=2024-01-30T23:12:40.558136699Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.428814ms
grafana | logger=migrator t=2024-01-30T23:12:40.561016649Z level=info msg="Executing migration" id="create dashboard acl table"
grafana | logger=migrator t=2024-01-30T23:12:40.561829281Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=812.162µs
grafana | logger=migrator t=2024-01-30T23:12:40.566139261Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
grafana | logger=migrator t=2024-01-30T23:12:40.567004505Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=860.115µs
grafana | logger=migrator t=2024-01-30T23:12:40.569895035Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
grafana | logger=migrator t=2024-01-30T23:12:40.570885003Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=989.698µs
grafana | logger=migrator t=2024-01-30T23:12:40.575259415Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
grafana | logger=migrator t=2024-01-30T23:12:40.57617991Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=920.005µs
grafana | logger=migrator t=2024-01-30T23:12:40.580183761Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
grafana | logger=migrator t=2024-01-30T23:12:40.581039206Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=855.095µs
grafana | logger=migrator t=2024-01-30T23:12:40.583568436Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
grafana | logger=migrator t=2024-01-30T23:12:40.58443432Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=868.434µs
grafana | logger=migrator t=2024-01-30T23:12:40.58730718Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
grafana | logger=migrator t=2024-01-30T23:12:40.588164414Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=856.784µs
grafana | logger=migrator t=2024-01-30T23:12:40.592075493Z level=info msg="Executing migration" id="add index dashboard_permission"
grafana | logger=migrator t=2024-01-30T23:12:40.592934077Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=858.184µs
grafana | logger=migrator t=2024-01-30T23:12:40.595647322Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
grafana | logger=migrator t=2024-01-30T23:12:40.596097184Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=452.802µs
grafana | logger=migrator t=2024-01-30T23:12:40.600444436Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
grafana | logger=migrator t=2024-01-30T23:12:40.600651471Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=206.695µs
grafana | logger=migrator t=2024-01-30T23:12:40.603006777Z level=info msg="Executing migration" id="create tag table"
grafana | logger=migrator t=2024-01-30T23:12:40.603972014Z level=info msg="Migration successfully executed" id="create tag table" duration=964.947µs
grafana | logger=migrator t=2024-01-30T23:12:40.607269636Z level=info msg="Executing migration" id="add index tag.key_value"
grafana | logger=migrator t=2024-01-30T23:12:40.608657845Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.385539ms
grafana | logger=migrator t=2024-01-30T23:12:40.613104088Z level=info msg="Executing migration" id="create login attempt table"
grafana | logger=migrator t=2024-01-30T23:12:40.613751435Z level=info msg="Migration successfully executed" id="create login attempt table" duration=647.127µs
grafana | logger=migrator t=2024-01-30T23:12:40.616654186Z level=info msg="Executing migration" id="add index login_attempt.username"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
grafana | logger=migrator t=2024-01-30T23:12:40.617510711Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=856.165µs
grafana | logger=migrator t=2024-01-30T23:12:40.620511454Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
grafana | logger=migrator t=2024-01-30T23:12:40.621355527Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=842.583µs
grafana | logger=migrator t=2024-01-30T23:12:40.625410331Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
grafana | logger=migrator t=2024-01-30T23:12:40.64513749Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=19.726909ms
grafana | logger=migrator t=2024-01-30T23:12:40.654309725Z level=info msg="Executing migration" id="create login_attempt v2"
grafana | logger=migrator t=2024-01-30T23:12:40.655566689Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.262205ms
grafana | logger=migrator t=2024-01-30T23:12:40.663688566Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
grafana | logger=migrator t=2024-01-30T23:12:40.664626752Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=938.077µs
grafana | logger=migrator t=2024-01-30T23:12:40.672763028Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
grafana | logger=migrator t=2024-01-30T23:12:40.673234952Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=467.794µs
grafana | logger=migrator t=2024-01-30T23:12:40.682462118Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
grafana | logger=migrator t=2024-01-30T23:12:40.683463426Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.000557ms
grafana | logger=migrator t=2024-01-30T23:12:40.691700145Z level=info msg="Executing migration" id="create user auth table"
grafana | logger=migrator t=2024-01-30T23:12:40.692998542Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.297537ms
grafana | logger=migrator t=2024-01-30T23:12:40.702778303Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
grafana | logger=migrator t=2024-01-30T23:12:40.704273965Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.495572ms
grafana | logger=migrator t=2024-01-30T23:12:40.711078934Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
grafana | logger=migrator t=2024-01-30T23:12:40.711224998Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=145.454µs
grafana | logger=migrator t=2024-01-30T23:12:40.719479668Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
grafana | logger=migrator t=2024-01-30T23:12:40.727555253Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.075585ms
grafana | logger=migrator t=2024-01-30T23:12:40.732979324Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
grafana | logger=migrator t=2024-01-30T23:12:40.740901095Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=7.922811ms
grafana | logger=migrator t=2024-01-30T23:12:40.744603218Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
grafana | logger=migrator t=2024-01-30T23:12:40.749472333Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.868716ms
grafana | logger=migrator t=2024-01-30T23:12:40.752403195Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
grafana | logger=migrator t=2024-01-30T23:12:40.757329662Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.926157ms
grafana | logger=migrator t=2024-01-30T23:12:40.761351354Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
grafana | logger=migrator t=2024-01-30T23:12:40.76232698Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=978.027µs
grafana | logger=migrator t=2024-01-30T23:12:40.765223541Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
grafana | logger=migrator t=2024-01-30T23:12:40.770254321Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.0301ms
grafana | logger=migrator t=2024-01-30T23:12:40.772695759Z level=info msg="Executing migration" id="create server_lock table"
grafana | logger=migrator t=2024-01-30T23:12:40.77344333Z level=info msg="Migration successfully executed" id="create server_lock table" duration=745.531µs
grafana | logger=migrator t=2024-01-30T23:12:40.777808412Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
grafana | logger=migrator t=2024-01-30T23:12:40.778781179Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=972.366µs
grafana | logger=migrator t=2024-01-30T23:12:40.781537735Z level=info msg="Executing migration" id="create user auth token table"
grafana | logger=migrator t=2024-01-30T23:12:40.782327178Z level=info msg="Migration successfully executed" id="create user auth token table" duration=789.173µs
grafana | logger=migrator t=2024-01-30T23:12:40.785095255Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
grafana | logger=migrator t=2024-01-30T23:12:40.786068522Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=972.727µs
grafana | logger=migrator t=2024-01-30T23:12:40.792341636Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
grafana | logger=migrator t=2024-01-30T23:12:40.79389835Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.550854ms
grafana | logger=migrator t=2024-01-30T23:12:40.798399694Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
grafana | logger=migrator t=2024-01-30T23:12:40.80002827Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.634796ms
grafana | logger=migrator t=2024-01-30T23:12:40.803099626Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
grafana | logger=migrator t=2024-01-30T23:12:40.808415744Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.315548ms
grafana | logger=migrator t=2024-01-30T23:12:40.812516457Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
grafana | logger=migrator t=2024-01-30T23:12:40.813514246Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=997.629µs
grafana | logger=migrator t=2024-01-30T23:12:40.816370165Z level=info msg="Executing migration" id="create cache_data table"
grafana | logger=migrator t=2024-01-30T23:12:40.817111486Z level=info msg="Migration successfully executed" id="create cache_data table" duration=735.911µs
grafana | logger=migrator t=2024-01-30T23:12:40.821961501Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
grafana | logger=migrator t=2024-01-30T23:12:40.823602566Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.640835ms
grafana | logger=migrator t=2024-01-30T23:12:40.828488682Z level=info msg="Executing migration" id="create short_url table v1"
policy-db-migrator |
policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
policy-db-migrator | --------------
policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-db-migrator | --------------
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
simulator | 2024-01-30 23:12:43,956 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4912 ms.
simulator | 2024-01-30 23:12:43,957 INFO org.onap.policy.models.simulators starting SO simulator simulator | 2024-01-30 23:12:43,959 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-01-30 23:12:43,960 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-30 23:12:43,960 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, 
swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-30 23:12:43,961 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 simulator | 2024-01-30 23:12:43,963 INFO Session workerName=node0 simulator | 2024-01-30 23:12:44,016 INFO Using GSON for REST calls simulator | 2024-01-30 23:12:44,028 INFO Started o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE} simulator | 2024-01-30 23:12:44,029 INFO Started SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} simulator | 2024-01-30 23:12:44,029 INFO Started Server@30bcf3c1{STARTING}[11.0.18,sto=0] @1934ms simulator | 2024-01-30 23:12:44,030 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4930 ms. simulator | 2024-01-30 23:12:44,030 INFO org.onap.policy.models.simulators starting VFC simulator simulator | 2024-01-30 23:12:44,032 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-01-30 23:12:44,033 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-30 23:12:44,034 INFO 
JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-30 23:12:44,034 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 simulator | 2024-01-30 23:12:44,043 INFO Session workerName=node0 simulator | 2024-01-30 23:12:44,082 INFO Using GSON for REST calls simulator | 2024-01-30 23:12:44,089 INFO Started o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE} simulator | 2024-01-30 23:12:44,090 INFO Started VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} simulator | 2024-01-30 23:12:44,091 INFO Started Server@a776e{STARTING}[11.0.18,sto=0] @1995ms simulator | 2024-01-30 23:12:44,091 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, 
jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4943 ms. simulator | 2024-01-30 23:12:44,092 INFO org.onap.policy.models.simulators started grafana | logger=migrator t=2024-01-30T23:12:40.829378047Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=888.035µs grafana | logger=migrator t=2024-01-30T23:12:40.83268383Z level=info msg="Executing migration" id="add index short_url.org_id-uid" grafana | logger=migrator t=2024-01-30T23:12:40.833944574Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.264905ms grafana | logger=migrator t=2024-01-30T23:12:40.838173122Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" grafana | logger=migrator t=2024-01-30T23:12:40.838256365Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=83.932µs grafana | logger=migrator t=2024-01-30T23:12:40.844361274Z level=info msg="Executing migration" id="delete alert_definition table" grafana | logger=migrator t=2024-01-30T23:12:40.844462937Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=101.883µs grafana | logger=migrator t=2024-01-30T23:12:40.848645493Z level=info msg="Executing migration" id="recreate alert_definition table" grafana | logger=migrator t=2024-01-30T23:12:40.849437276Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=790.202µs grafana | logger=migrator t=2024-01-30T23:12:40.85318908Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-01-30T23:12:40.854193587Z level=info msg="Migration 
successfully executed" id="add index in alert_definition on org_id and title columns" duration=998.747µs grafana | logger=migrator t=2024-01-30T23:12:40.859579227Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-01-30T23:12:40.861215083Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.634966ms grafana | logger=migrator t=2024-01-30T23:12:40.86758294Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" grafana | logger=migrator t=2024-01-30T23:12:40.867681263Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=99.773µs grafana | logger=migrator t=2024-01-30T23:12:40.870122281Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-01-30T23:12:40.871055667Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=932.766µs grafana | logger=migrator t=2024-01-30T23:12:40.876294103Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-01-30T23:12:40.877444455Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.149483ms grafana | logger=migrator t=2024-01-30T23:12:40.88268649Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" grafana | logger=migrator t=2024-01-30T23:12:40.884136331Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.449741ms grafana | logger=migrator t=2024-01-30T23:12:40.88841699Z level=info msg="Executing migration" id="add unique index in 
alert_definition on org_id and uid columns" grafana | logger=migrator t=2024-01-30T23:12:40.889348656Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=931.486µs grafana | logger=migrator t=2024-01-30T23:12:40.89524102Z level=info msg="Executing migration" id="Add column paused in alert_definition" grafana | logger=migrator t=2024-01-30T23:12:40.904193919Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=8.930448ms grafana | logger=migrator t=2024-01-30T23:12:40.907149961Z level=info msg="Executing migration" id="drop alert_definition table" grafana | logger=migrator t=2024-01-30T23:12:40.907809109Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=662.818µs grafana | logger=migrator t=2024-01-30T23:12:40.910484374Z level=info msg="Executing migration" id="delete alert_definition_version table" grafana | logger=migrator t=2024-01-30T23:12:40.910544036Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=60.182µs grafana | logger=migrator t=2024-01-30T23:12:40.914837126Z level=info msg="Executing migration" id="recreate alert_definition_version table" grafana | logger=migrator t=2024-01-30T23:12:40.91609229Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.254855ms grafana | logger=migrator t=2024-01-30T23:12:40.918939109Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" grafana | logger=migrator t=2024-01-30T23:12:40.920524484Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.588165ms grafana | logger=migrator t=2024-01-30T23:12:40.923449425Z level=info msg="Executing migration" id="add index in 
alert_definition_version table on alert_definition_uid and version columns" grafana | logger=migrator t=2024-01-30T23:12:40.924458723Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.009008ms grafana | logger=migrator t=2024-01-30T23:12:40.928798354Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" grafana | logger=migrator t=2024-01-30T23:12:40.928942098Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=145.204µs grafana | logger=migrator t=2024-01-30T23:12:40.932157397Z level=info msg="Executing migration" id="drop alert_definition_version table" grafana | logger=migrator t=2024-01-30T23:12:40.933628579Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.470722ms grafana | logger=migrator t=2024-01-30T23:12:40.936625312Z level=info msg="Executing migration" id="create alert_instance table" grafana | logger=migrator t=2024-01-30T23:12:40.937444085Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=818.433µs grafana | logger=migrator t=2024-01-30T23:12:40.941689102Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" grafana | logger=migrator t=2024-01-30T23:12:40.942727322Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.03775ms grafana | logger=migrator t=2024-01-30T23:12:40.945471318Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" grafana | logger=migrator t=2024-01-30T23:12:40.946454685Z level=info msg="Migration successfully executed" id="add index in alert_instance table on 
def_org_id, current_state columns" duration=982.887µs grafana | logger=migrator t=2024-01-30T23:12:40.950852408Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" grafana | logger=migrator t=2024-01-30T23:12:40.956497545Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.644717ms grafana | logger=migrator t=2024-01-30T23:12:40.959687944Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" grafana | logger=migrator t=2024-01-30T23:12:40.960577638Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=889.584µs grafana | logger=migrator t=2024-01-30T23:12:40.96350355Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-01-30T23:12:40.964377145Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=873.375µs grafana | logger=migrator t=2024-01-30T23:12:40.968737295Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" grafana | logger=migrator t=2024-01-30T23:12:41.004998125Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=36.26116ms grafana | logger=migrator t=2024-01-30T23:12:41.008457592Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" grafana | logger=migrator t=2024-01-30T23:12:41.042935718Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=34.464546ms grafana | logger=migrator t=2024-01-30T23:12:41.046404928Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" grafana | logger=migrator t=2024-01-30T23:12:41.047262779Z level=info 
msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=856.991µs grafana | logger=migrator t=2024-01-30T23:12:41.05153543Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" grafana | logger=migrator t=2024-01-30T23:12:41.052451133Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=915.363µs grafana | logger=migrator t=2024-01-30T23:12:41.055369428Z level=info msg="Executing migration" id="add current_reason column related to current_state" grafana | logger=migrator t=2024-01-30T23:12:41.060856519Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.486811ms grafana | logger=migrator t=2024-01-30T23:12:41.063955959Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" grafana | logger=migrator t=2024-01-30T23:12:41.069452451Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.496452ms grafana | logger=migrator t=2024-01-30T23:12:41.07405791Z level=info msg="Executing migration" id="create alert_rule table" grafana | logger=migrator t=2024-01-30T23:12:41.074886311Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=863.692µs grafana | logger=migrator t=2024-01-30T23:12:41.079753116Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" grafana | logger=migrator t=2024-01-30T23:12:41.080722441Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=968.875µs grafana | logger=migrator t=2024-01-30T23:12:41.083923823Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" grafana | logger=migrator t=2024-01-30T23:12:41.084843147Z level=info msg="Migration 
successfully executed" id="add index in alert_rule on org_id and uid columns" duration=918.924µs grafana | logger=migrator t=2024-01-30T23:12:41.08926099Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" grafana | logger=migrator t=2024-01-30T23:12:41.0904147Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.15304ms grafana | logger=migrator t=2024-01-30T23:12:41.093745185Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" grafana | logger=migrator t=2024-01-30T23:12:41.093833717Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=88.782µs policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | 
policy-db-migrator | policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql policy-db-migrator | -------------- kafka | [2024-01-30 23:13:16,085] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,085] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,085] INFO [Partition __consumer_offsets-7 broker=1] Log loaded 
for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,086] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) kafka | [2024-01-30 23:13:16,091] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,093] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,094] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,094] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,094] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,105] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,106] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,106] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,106] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,106] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,112] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,113] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,113] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,113] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,113] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,120] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,120] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,120] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,120] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,121] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,129] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,130] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,130] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,130] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,130] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,140] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,141] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,141] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,141] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,141] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,149] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-pdp.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql policy-db-migrator | -------------- 
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-pdpstatistics.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | UPDATE jpapdpstatistics_enginestats a policy-db-migrator | JOIN pdpstatistics b policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp policy-db-migrator | SET a.id = b.id policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP 
VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql kafka | [2024-01-30 23:13:16,149] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,149] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,149] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,149] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,156] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,157] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,157] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,157] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,157] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(B6KsyJDSTOqeYl8_kE1bXQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,162] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,163] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,163] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,163] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,163] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,168] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,169] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,169] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,169] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,169] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,174] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,174] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,174] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,174] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,174] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,179] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,179] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,179] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,179] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,179] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,187] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,187] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,187] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,187] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,187] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,192] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,192] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,192] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0210-sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0220-sequence.sql policy-db-migrator | -------------- policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql policy-db-migrator | -------------- 
policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata policy-db-migrator | -------------- policy-db-migrator | policy-pap | [2024-01-30T23:13:19.441+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 policy-pap | [2024-01-30T23:13:19.467+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Found no committed offset for partition policy-pdp-pap-0 policy-pap | 
[2024-01-30T23:13:19.467+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 policy-pap | [2024-01-30T23:13:19.488+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-af90a869-32d4-41c0-900c-5574709c07e7-3, groupId=af90a869-32d4-41c0-900c-5574709c07e7] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-01-30T23:13:19.489+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. policy-pap | [2024-01-30T23:13:21.466+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-pap | [2024-01-30T23:13:21.466+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet' policy-pap | [2024-01-30T23:13:21.469+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 3 ms policy-pap | [2024-01-30T23:13:36.551+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: policy-pap | [] policy-pap | [2024-01-30T23:13:36.552+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"cc9a3d7e-6fad-40c4-946d-d865e0c0f98c","timestampMs":1706656416513,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup"} policy-pap | 
[2024-01-30T23:13:36.552+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"cc9a3d7e-6fad-40c4-946d-d865e0c0f98c","timestampMs":1706656416513,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup"} policy-pap | [2024-01-30T23:13:36.560+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-01-30T23:13:36.639+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate starting policy-pap | [2024-01-30T23:13:36.639+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate starting listener policy-pap | [2024-01-30T23:13:36.640+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate starting timer policy-pap | [2024-01-30T23:13:36.640+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=95617564-9902-46ea-a031-5c473077bc58, expireMs=1706656446640] policy-pap | [2024-01-30T23:13:36.642+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate starting enqueue policy-pap | [2024-01-30T23:13:36.642+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=95617564-9902-46ea-a031-5c473077bc58, expireMs=1706656446640] policy-pap | [2024-01-30T23:13:36.643+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate started policy-pap | [2024-01-30T23:13:36.644+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | 
{"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"95617564-9902-46ea-a031-5c473077bc58","timestampMs":1706656416624,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-01-30T23:13:36.676+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"95617564-9902-46ea-a031-5c473077bc58","timestampMs":1706656416624,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-01-30T23:13:36.676+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE policy-pap | [2024-01-30T23:13:36.691+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"14a6ee9f-1ac9-4550-bc77-87a565b4b7f0","timestampMs":1706656416682,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup"} policy-pap | [2024-01-30T23:13:36.691+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"95617564-9902-46ea-a031-5c473077bc58","timestampMs":1706656416624,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-01-30T23:13:36.694+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE policy-pap | [2024-01-30T23:13:36.697+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"14a6ee9f-1ac9-4550-bc77-87a565b4b7f0","timestampMs":1706656416682,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup"} policy-pap | [2024-01-30T23:13:36.697+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-pap | [2024-01-30T23:13:36.702+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"95617564-9902-46ea-a031-5c473077bc58","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"15f9b549-16ef-482c-8053-79ffdc7adaa7","timestampMs":1706656416684,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-01-30T23:13:36.715+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate stopping policy-pap | [2024-01-30T23:13:36.715+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate stopping enqueue policy-pap | [2024-01-30T23:13:36.715+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate stopping timer policy-pap | [2024-01-30T23:13:36.716+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=95617564-9902-46ea-a031-5c473077bc58, expireMs=1706656446640] policy-pap | [2024-01-30T23:13:36.716+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate stopping listener policy-db-migrator | -------------- policy-db-migrator | DROP TABLE IF EXISTS toscaproperty policy-db-migrator | -------------- policy-db-migrator | 
policy-db-migrator | policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0100-upgrade.sql policy-db-migrator | -------------- policy-db-migrator | select 'upgrade to 1100 completed' as msg policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | msg policy-db-migrator | upgrade to 1100 completed policy-db-migrator | policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql policy-db-migrator | -------------- policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0110-idx_tsidx1.sql policy-db-migrator | -------------- policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | CREATE INDEX IDXTSIDX1 ON 
pdpstatistics(timeStamp, name, version) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0120-audit_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | -------------- policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) policy-db-migrator | -------------- policy-db-migrator | policy-db-migrator | policy-db-migrator | > upgrade 0130-statistics_sequence.sql policy-db-migrator | -------------- policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-db-migrator | -------------- policy-db-migrator | policy-pap | [2024-01-30T23:13:36.716+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate stopped policy-pap | [2024-01-30T23:13:36.718+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"95617564-9902-46ea-a031-5c473077bc58","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"15f9b549-16ef-482c-8053-79ffdc7adaa7","timestampMs":1706656416684,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-01-30T23:13:36.719+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 95617564-9902-46ea-a031-5c473077bc58 policy-pap | 
[2024-01-30T23:13:36.721+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate successful policy-pap | [2024-01-30T23:13:36.722+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 start publishing next request policy-pap | [2024-01-30T23:13:36.722+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpStateChange starting policy-pap | [2024-01-30T23:13:36.722+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpStateChange starting listener policy-pap | [2024-01-30T23:13:36.722+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpStateChange starting timer policy-pap | [2024-01-30T23:13:36.722+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=845e0731-24c1-4793-a94a-51d784453a0e, expireMs=1706656446722] policy-pap | [2024-01-30T23:13:36.722+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpStateChange starting enqueue policy-pap | [2024-01-30T23:13:36.722+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpStateChange started policy-pap | [2024-01-30T23:13:36.722+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=845e0731-24c1-4793-a94a-51d784453a0e, expireMs=1706656446722] policy-pap | [2024-01-30T23:13:36.723+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] policy-pap | {"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"845e0731-24c1-4793-a94a-51d784453a0e","timestampMs":1706656416624,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-pap | [2024-01-30T23:13:36.735+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 
policy-pap | {"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"845e0731-24c1-4793-a94a-51d784453a0e","timestampMs":1706656416624,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-01-30T23:13:36.735+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
policy-pap | [2024-01-30T23:13:36.743+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"845e0731-24c1-4793-a94a-51d784453a0e","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a3a8701d-f5d8-46f2-8bb5-07d7e08cf634","timestampMs":1706656416735,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-01-30T23:13:36.744+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 845e0731-24c1-4793-a94a-51d784453a0e
policy-pap | [2024-01-30T23:13:36.755+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"845e0731-24c1-4793-a94a-51d784453a0e","timestampMs":1706656416624,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-01-30T23:13:36.755+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"845e0731-24c1-4793-a94a-51d784453a0e","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a3a8701d-f5d8-46f2-8bb5-07d7e08cf634","timestampMs":1706656416735,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpStateChange stopping
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpStateChange stopping enqueue
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpStateChange stopping timer
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=845e0731-24c1-4793-a94a-51d784453a0e, expireMs=1706656446722]
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpStateChange stopping listener
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpStateChange stopped
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpStateChange successful
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 start publishing next request
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate starting
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate starting listener
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate starting timer
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=16179ea5-6b9f-4f52-b894-6f3dc6366661, expireMs=1706656446758]
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate starting enqueue
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate started
policy-pap | [2024-01-30T23:13:36.758+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-pap | {"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"16179ea5-6b9f-4f52-b894-6f3dc6366661","timestampMs":1706656416749,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-01-30T23:13:36.766+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | TRUNCATE TABLE sequence
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0100-pdpstatistics.sql
policy-db-migrator | --------------
policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE pdpstatistics
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator |
policy-db-migrator | > upgrade 0120-statistics_sequence.sql
policy-db-migrator | --------------
policy-db-migrator | DROP TABLE statistics_sequence
policy-db-migrator | --------------
policy-db-migrator |
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-db-migrator | name version
policy-db-migrator | policyadmin 1300
policy-db-migrator | ID script operation from_version to_version tag success atTime
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:46
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-pap | {"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"16179ea5-6b9f-4f52-b894-6f3dc6366661","timestampMs":1706656416749,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-01-30T23:13:36.766+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
policy-pap | [2024-01-30T23:13:36.767+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"source":"pap-48dc9faf-b8e7-4d09-9d5b-074862ab777b","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"16179ea5-6b9f-4f52-b894-6f3dc6366661","timestampMs":1706656416749,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-01-30T23:13:36.767+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
policy-pap | [2024-01-30T23:13:36.776+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"16179ea5-6b9f-4f52-b894-6f3dc6366661","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"c3c51fe5-b799-4ea9-886a-c2cd7f8f53ff","timestampMs":1706656416766,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-01-30T23:13:36.776+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate stopping
policy-pap | [2024-01-30T23:13:36.776+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate stopping enqueue
policy-pap | [2024-01-30T23:13:36.776+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate stopping timer
policy-pap | [2024-01-30T23:13:36.776+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=16179ea5-6b9f-4f52-b894-6f3dc6366661, expireMs=1706656446758]
policy-pap | [2024-01-30T23:13:36.776+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate stopping listener
policy-pap | [2024-01-30T23:13:36.776+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate stopped
policy-pap | [2024-01-30T23:13:36.777+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"16179ea5-6b9f-4f52-b894-6f3dc6366661","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"c3c51fe5-b799-4ea9-886a-c2cd7f8f53ff","timestampMs":1706656416766,"name":"apex-7b53246f-ad1b-4a06-8145-df0850258945","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-pap | [2024-01-30T23:13:36.777+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 16179ea5-6b9f-4f52-b894-6f3dc6366661
policy-pap |
[2024-01-30T23:13:36.780+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 PdpUpdate successful
policy-pap | [2024-01-30T23:13:36.780+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-7b53246f-ad1b-4a06-8145-df0850258945 has no more requests
policy-pap | [2024-01-30T23:13:42.082+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2024-01-30T23:13:42.089+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls
policy-pap | [2024-01-30T23:13:42.452+00:00|INFO|SessionData|http-nio-6969-exec-9] unknown group testGroup
policy-pap | [2024-01-30T23:13:43.019+00:00|INFO|SessionData|http-nio-6969-exec-9] create cached group testGroup
policy-pap | [2024-01-30T23:13:43.020+00:00|INFO|SessionData|http-nio-6969-exec-9] creating DB group testGroup
policy-pap | [2024-01-30T23:13:43.532+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group testGroup
policy-pap | [2024-01-30T23:13:43.737+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering a deploy for policy onap.restart.tca 1.0.0
policy-pap | [2024-01-30T23:13:43.819+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2024-01-30T23:13:43.819+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group testGroup
policy-pap | [2024-01-30T23:13:43.820+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group testGroup
policy-pap | [2024-01-30T23:13:43.834+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-01-30T23:13:43Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-01-30T23:13:43Z, user=policyadmin)]
policy-pap | [2024-01-30T23:13:44.532+00:00|INFO|SessionData|http-nio-6969-exec-3] cache group testGroup
policy-pap | [2024-01-30T23:13:44.533+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-3] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
policy-pap | [2024-01-30T23:13:44.533+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] Registering an undeploy for policy onap.restart.tca 1.0.0
policy-pap | [2024-01-30T23:13:44.533+00:00|INFO|SessionData|http-nio-6969-exec-3] update cached group testGroup
policy-pap | [2024-01-30T23:13:44.534+00:00|INFO|SessionData|http-nio-6969-exec-3] updating DB group testGroup
policy-pap | [2024-01-30T23:13:44.544+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-3] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-30T23:13:44Z, user=policyadmin)]
policy-pap | [2024-01-30T23:13:44.856+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group defaultGroup
policy-pap | [2024-01-30T23:13:44.856+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup
policy-pap | [2024-01-30T23:13:44.856+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
policy-pap | [2024-01-30T23:13:44.856+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
policy-pap | [2024-01-30T23:13:44.856+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup
policy-pap | [2024-01-30T23:13:44.857+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup
policy-pap | [2024-01-30T23:13:44.868+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-30T23:13:44Z, user=policyadmin)]
policy-pap | [2024-01-30T23:14:05.451+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group testGroup
policy-pap | [2024-01-30T23:14:05.453+00:00|INFO|SessionData|http-nio-6969-exec-2] deleting DB group testGroup
policy-pap | [2024-01-30T23:14:06.640+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=95617564-9902-46ea-a031-5c473077bc58, expireMs=1706656446640]
policy-pap | [2024-01-30T23:14:06.722+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=845e0731-24c1-4793-a94a-51d784453a0e, expireMs=1706656446722]
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
grafana | logger=migrator t=2024-01-30T23:12:41.096731682Z level=info msg="Executing migration" id="add column for to alert_rule"
grafana | logger=migrator t=2024-01-30T23:12:41.105292271Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.561549ms
grafana | logger=migrator t=2024-01-30T23:12:41.109869399Z level=info msg="Executing migration" id="add column annotations to alert_rule"
grafana | logger=migrator t=2024-01-30T23:12:41.116238983Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.369124ms
grafana | logger=migrator t=2024-01-30T23:12:41.119535487Z level=info msg="Executing migration" id="add column labels to alert_rule"
grafana | logger=migrator t=2024-01-30T23:12:41.125774997Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.24013ms
grafana | logger=migrator t=2024-01-30T23:12:41.128932678Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
grafana | logger=migrator t=2024-01-30T23:12:41.130503899Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.566ms
grafana | logger=migrator t=2024-01-30T23:12:41.135471226Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
grafana | logger=migrator t=2024-01-30T23:12:41.136639386Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.16649ms
grafana | logger=migrator t=2024-01-30T23:12:41.142054145Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
grafana | logger=migrator t=2024-01-30T23:12:41.152019021Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=10.005497ms
grafana | logger=migrator t=2024-01-30T23:12:41.155668664Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
grafana | logger=migrator t=2024-01-30T23:12:41.161660958Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.991794ms
grafana | logger=migrator t=2024-01-30T23:12:41.165945029Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
grafana | logger=migrator t=2024-01-30T23:12:41.167042566Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.097247ms
grafana | logger=migrator t=2024-01-30T23:12:41.170447793Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
grafana | logger=migrator t=2024-01-30T23:12:41.176332395Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.884252ms
grafana | logger=migrator t=2024-01-30T23:12:41.1796552Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
grafana | logger=migrator t=2024-01-30T23:12:41.185480989Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.825359ms
grafana | logger=migrator t=2024-01-30T23:12:41.189804151Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
grafana | logger=migrator t=2024-01-30T23:12:41.189971665Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=167.695µs
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:47
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:48
policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
grafana |
logger=migrator t=2024-01-30T23:12:41.193354571Z level=info msg="Executing migration" id="create alert_rule_version table" grafana | logger=migrator t=2024-01-30T23:12:41.19444923Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.091269ms grafana | logger=migrator t=2024-01-30T23:12:41.198208656Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" grafana | logger=migrator t=2024-01-30T23:12:41.200083315Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.873969ms grafana | logger=migrator t=2024-01-30T23:12:41.204506368Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" grafana | logger=migrator t=2024-01-30T23:12:41.205706899Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.199861ms grafana | logger=migrator t=2024-01-30T23:12:41.208950132Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" grafana | logger=migrator t=2024-01-30T23:12:41.209092155Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=141.623µs grafana | logger=migrator t=2024-01-30T23:12:41.212288408Z level=info msg="Executing migration" id="add column for to alert_rule_version" grafana | logger=migrator t=2024-01-30T23:12:41.218905677Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.61566ms grafana | logger=migrator t=2024-01-30T23:12:41.223389952Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" grafana | logger=migrator t=2024-01-30T23:12:41.229751426Z 
level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.363693ms grafana | logger=migrator t=2024-01-30T23:12:41.232614089Z level=info msg="Executing migration" id="add column labels to alert_rule_version" grafana | logger=migrator t=2024-01-30T23:12:41.238164042Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.547622ms grafana | logger=migrator t=2024-01-30T23:12:41.241397394Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" grafana | logger=migrator t=2024-01-30T23:12:41.24784087Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.442786ms grafana | logger=migrator t=2024-01-30T23:12:41.252050188Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" grafana | logger=migrator t=2024-01-30T23:12:41.258202185Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.151397ms grafana | logger=migrator t=2024-01-30T23:12:41.261301086Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" grafana | logger=migrator t=2024-01-30T23:12:41.261425589Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=124.223µs grafana | logger=migrator t=2024-01-30T23:12:41.264684103Z level=info msg="Executing migration" id=create_alert_configuration_table grafana | logger=migrator t=2024-01-30T23:12:41.266190071Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.505578ms grafana | logger=migrator t=2024-01-30T23:12:41.269889646Z level=info msg="Executing migration" id="Add column default in alert_configuration" grafana | logger=migrator t=2024-01-30T23:12:41.278051326Z level=info msg="Migration successfully executed" id="Add column default 
in alert_configuration" duration=8.16192ms grafana | logger=migrator t=2024-01-30T23:12:41.281107224Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" grafana | logger=migrator t=2024-01-30T23:12:41.281184166Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=77.022µs grafana | logger=migrator t=2024-01-30T23:12:41.284207554Z level=info msg="Executing migration" id="add column org_id in alert_configuration" grafana | logger=migrator t=2024-01-30T23:12:41.290918036Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.704932ms grafana | logger=migrator t=2024-01-30T23:12:41.296434877Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" grafana | logger=migrator t=2024-01-30T23:12:41.298676915Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=2.238988ms grafana | logger=migrator t=2024-01-30T23:12:41.302236036Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" grafana | logger=migrator t=2024-01-30T23:12:41.308537247Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.301051ms grafana | logger=migrator t=2024-01-30T23:12:41.311580937Z level=info msg="Executing migration" id=create_ngalert_configuration_table grafana | logger=migrator t=2024-01-30T23:12:41.312074279Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=490.353µs grafana | logger=migrator t=2024-01-30T23:12:41.316884142Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" grafana | logger=migrator t=2024-01-30T23:12:41.317592451Z level=info msg="Migration 
successfully executed" id="add index in ngalert_configuration on org_id column" duration=707.859µs grafana | logger=migrator t=2024-01-30T23:12:41.320590388Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" grafana | logger=migrator t=2024-01-30T23:12:41.328993673Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=8.404216ms grafana | logger=migrator t=2024-01-30T23:12:41.331925168Z level=info msg="Executing migration" id="create provenance_type table" grafana | logger=migrator t=2024-01-30T23:12:41.332748599Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=823.191µs grafana | logger=migrator t=2024-01-30T23:12:41.33783632Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" grafana | logger=migrator t=2024-01-30T23:12:41.338570279Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=735.439µs grafana | logger=migrator t=2024-01-30T23:12:41.342790638Z level=info msg="Executing migration" id="create alert_image table" grafana | logger=migrator t=2024-01-30T23:12:41.343924426Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.133129ms grafana | logger=migrator t=2024-01-30T23:12:41.347149119Z level=info msg="Executing migration" id="add unique index on token to alert_image table" grafana | logger=migrator t=2024-01-30T23:12:41.348646758Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.496949ms grafana | logger=migrator t=2024-01-30T23:12:41.353748129Z level=info msg="Executing migration" id="support longer URLs in alert_image table" grafana | logger=migrator t=2024-01-30T23:12:41.353853592Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" 
duration=105.713µs
policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 3001242312460800u 1 2024-01-30 23:12:49
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:49
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:49
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:49
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:49
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:49
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:49
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:50
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:50
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:50
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:50
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:50
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:50
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 3001242312460900u 1 2024-01-30 23:12:50
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 3001242312461000u 1 2024-01-30 23:12:50
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 3001242312461000u 1 2024-01-30 23:12:50
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 3001242312461000u 1 2024-01-30 23:12:50
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 3001242312461000u 1 2024-01-30 23:12:50
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 3001242312461000u 1 2024-01-30 23:12:50
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 3001242312461000u 1 2024-01-30 23:12:50
grafana | logger=migrator t=2024-01-30T23:12:41.356798517Z level=info msg="Executing migration"
id=create_alert_configuration_history_table
grafana | logger=migrator t=2024-01-30T23:12:41.357652119Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=853.232µs
grafana | logger=migrator t=2024-01-30T23:12:41.360709497Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
grafana | logger=migrator t=2024-01-30T23:12:41.361662291Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=951.794µs
grafana | logger=migrator t=2024-01-30T23:12:41.366904996Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2024-01-30T23:12:41.367539922Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
grafana | logger=migrator t=2024-01-30T23:12:41.370662163Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
grafana | logger=migrator t=2024-01-30T23:12:41.371098794Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=436.121µs
grafana | logger=migrator t=2024-01-30T23:12:41.37405449Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
grafana | logger=migrator t=2024-01-30T23:12:41.376227586Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=2.171056ms
grafana | logger=migrator t=2024-01-30T23:12:41.381340146Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
grafana | logger=migrator t=2024-01-30T23:12:41.3880836Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.743134ms
grafana | logger=migrator
t=2024-01-30T23:12:41.391056276Z level=info msg="Executing migration" id="create library_element table v1" grafana | logger=migrator t=2024-01-30T23:12:41.392140814Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.083887ms grafana | logger=migrator t=2024-01-30T23:12:41.395229653Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" grafana | logger=migrator t=2024-01-30T23:12:41.396414364Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.184221ms grafana | logger=migrator t=2024-01-30T23:12:41.401463184Z level=info msg="Executing migration" id="create library_element_connection table v1" grafana | logger=migrator t=2024-01-30T23:12:41.402388377Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=924.423µs grafana | logger=migrator t=2024-01-30T23:12:41.405308202Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" grafana | logger=migrator t=2024-01-30T23:12:41.406517952Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.20924ms grafana | logger=migrator t=2024-01-30T23:12:41.409400046Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" grafana | logger=migrator t=2024-01-30T23:12:41.410564637Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.164131ms grafana | logger=migrator t=2024-01-30T23:12:41.415512084Z level=info msg="Executing migration" id="increase max description length to 2048" grafana | logger=migrator t=2024-01-30T23:12:41.415543775Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=32.541µs grafana | logger=migrator t=2024-01-30T23:12:41.418000027Z 
level=info msg="Executing migration" id="alter library_element model to mediumtext" grafana | logger=migrator t=2024-01-30T23:12:41.418129991Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=130.154µs grafana | logger=migrator t=2024-01-30T23:12:41.421233901Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" grafana | logger=migrator t=2024-01-30T23:12:41.421840256Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=607.905µs grafana | logger=migrator t=2024-01-30T23:12:41.425248354Z level=info msg="Executing migration" id="create data_keys table" grafana | logger=migrator t=2024-01-30T23:12:41.426814984Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.56565ms grafana | logger=migrator t=2024-01-30T23:12:41.431132694Z level=info msg="Executing migration" id="create secrets table" grafana | logger=migrator t=2024-01-30T23:12:41.431944565Z level=info msg="Migration successfully executed" id="create secrets table" duration=811.471µs grafana | logger=migrator t=2024-01-30T23:12:41.435027355Z level=info msg="Executing migration" id="rename data_keys name column to id" grafana | logger=migrator t=2024-01-30T23:12:41.481025625Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=45.99622ms grafana | logger=migrator t=2024-01-30T23:12:41.484298079Z level=info msg="Executing migration" id="add name column into data_keys" grafana | logger=migrator t=2024-01-30T23:12:41.491426833Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=7.127853ms grafana | logger=migrator t=2024-01-30T23:12:41.495552148Z level=info msg="Executing migration" id="copy data_keys id column values into name" grafana | logger=migrator t=2024-01-30T23:12:41.495723492Z level=info msg="Migration successfully executed" 
id="copy data_keys id column values into name" duration=169.644µs grafana | logger=migrator t=2024-01-30T23:12:41.498177775Z level=info msg="Executing migration" id="rename data_keys name column to label" grafana | logger=migrator t=2024-01-30T23:12:41.543583851Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=45.404805ms grafana | logger=migrator t=2024-01-30T23:12:41.546987019Z level=info msg="Executing migration" id="rename data_keys id column back to name" grafana | logger=migrator t=2024-01-30T23:12:41.592055155Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=45.064546ms grafana | logger=migrator t=2024-01-30T23:12:41.596290363Z level=info msg="Executing migration" id="create kv_store table v1" grafana | logger=migrator t=2024-01-30T23:12:41.59693449Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=643.747µs grafana | logger=migrator t=2024-01-30T23:12:41.600041509Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" grafana | logger=migrator t=2024-01-30T23:12:41.600839741Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=797.612µs grafana | logger=migrator t=2024-01-30T23:12:41.60394343Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" grafana | logger=migrator t=2024-01-30T23:12:41.604401452Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=456.672µs grafana | logger=migrator t=2024-01-30T23:12:41.607971913Z level=info msg="Executing migration" id="create permission table" grafana | logger=migrator t=2024-01-30T23:12:41.609367669Z level=info msg="Migration successfully executed" id="create permission table" duration=1.393586ms kafka | [2024-01-30 23:13:16,192] INFO [Partition __consumer_offsets-5 
broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,192] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:16,199] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:16,199] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:16,199] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,199] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,200] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:16,206] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:16,206] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:16,206] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,207] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,207] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:16,212] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:16,213] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:16,213] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,213] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,213] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:16,219] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:16,220] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:16,220] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,220] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,220] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:16,228] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:16,229] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:16,229] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,229] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,229] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:16,234] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:16,235] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:16,235] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,235] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
kafka | [2024-01-30 23:13:16,235] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
kafka | [2024-01-30 23:13:16,246] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
kafka | [2024-01-30 23:13:16,246] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
kafka | [2024-01-30 23:13:16,247] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 3001242312461000u 1 2024-01-30 23:12:50
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 3001242312461000u 1 2024-01-30 23:12:50
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 3001242312461000u 1 2024-01-30 23:12:50
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 3001242312461100u 1 2024-01-30 23:12:50
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 3001242312461200u 1 2024-01-30 23:12:50
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 3001242312461200u 1 2024-01-30 23:12:50
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 3001242312461200u 1 2024-01-30 23:12:50
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 3001242312461200u 1 2024-01-30 23:12:50
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 3001242312461300u 1 2024-01-30 23:12:50
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 3001242312461300u 1 2024-01-30 23:12:50
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 3001242312461300u 1 2024-01-30 23:12:50
policy-db-migrator | policyadmin: OK @ 1300
grafana | logger=migrator t=2024-01-30T23:12:41.615126587Z level=info msg="Executing
migration" id="add unique index permission.role_id" grafana | logger=migrator t=2024-01-30T23:12:41.616144193Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.017296ms grafana | logger=migrator t=2024-01-30T23:12:41.619505739Z level=info msg="Executing migration" id="add unique index role_id_action_scope" grafana | logger=migrator t=2024-01-30T23:12:41.621209403Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.703314ms grafana | logger=migrator t=2024-01-30T23:12:41.625861183Z level=info msg="Executing migration" id="create role table" grafana | logger=migrator t=2024-01-30T23:12:41.627185937Z level=info msg="Migration successfully executed" id="create role table" duration=1.325834ms grafana | logger=migrator t=2024-01-30T23:12:41.630662285Z level=info msg="Executing migration" id="add column display_name" grafana | logger=migrator t=2024-01-30T23:12:41.637784269Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.121504ms grafana | logger=migrator t=2024-01-30T23:12:41.641195156Z level=info msg="Executing migration" id="add column group_name" grafana | logger=migrator t=2024-01-30T23:12:41.648411162Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.213476ms grafana | logger=migrator t=2024-01-30T23:12:41.651605044Z level=info msg="Executing migration" id="add index role.org_id" grafana | logger=migrator t=2024-01-30T23:12:41.652708482Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.103518ms grafana | logger=migrator t=2024-01-30T23:12:41.656691144Z level=info msg="Executing migration" id="add unique index role_org_id_name" grafana | logger=migrator t=2024-01-30T23:12:41.658032068Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.339344ms grafana | logger=migrator t=2024-01-30T23:12:41.66159438Z 
level=info msg="Executing migration" id="add index role_org_id_uid" grafana | logger=migrator t=2024-01-30T23:12:41.663368335Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.771906ms grafana | logger=migrator t=2024-01-30T23:12:41.666918636Z level=info msg="Executing migration" id="create team role table" grafana | logger=migrator t=2024-01-30T23:12:41.667684087Z level=info msg="Migration successfully executed" id="create team role table" duration=765.14µs grafana | logger=migrator t=2024-01-30T23:12:41.671500044Z level=info msg="Executing migration" id="add index team_role.org_id" grafana | logger=migrator t=2024-01-30T23:12:41.672602032Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.101658ms grafana | logger=migrator t=2024-01-30T23:12:41.675666661Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" grafana | logger=migrator t=2024-01-30T23:12:41.676861512Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.192691ms grafana | logger=migrator t=2024-01-30T23:12:41.680802433Z level=info msg="Executing migration" id="add index team_role.team_id" grafana | logger=migrator t=2024-01-30T23:12:41.68185864Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.055868ms grafana | logger=migrator t=2024-01-30T23:12:41.685140265Z level=info msg="Executing migration" id="create user role table" grafana | logger=migrator t=2024-01-30T23:12:41.685889104Z level=info msg="Migration successfully executed" id="create user role table" duration=746.969µs grafana | logger=migrator t=2024-01-30T23:12:41.689071305Z level=info msg="Executing migration" id="add index user_role.org_id" grafana | logger=migrator t=2024-01-30T23:12:41.690059141Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=987.516µs grafana | 
logger=migrator t=2024-01-30T23:12:41.69434177Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" grafana | logger=migrator t=2024-01-30T23:12:41.696115996Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.772676ms grafana | logger=migrator t=2024-01-30T23:12:41.699734939Z level=info msg="Executing migration" id="add index user_role.user_id" grafana | logger=migrator t=2024-01-30T23:12:41.701561336Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.822967ms grafana | logger=migrator t=2024-01-30T23:12:41.706675467Z level=info msg="Executing migration" id="create builtin role table" grafana | logger=migrator t=2024-01-30T23:12:41.707487998Z level=info msg="Migration successfully executed" id="create builtin role table" duration=812.211µs grafana | logger=migrator t=2024-01-30T23:12:41.710742501Z level=info msg="Executing migration" id="add index builtin_role.role_id" grafana | logger=migrator t=2024-01-30T23:12:41.711757027Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.013966ms grafana | logger=migrator t=2024-01-30T23:12:41.715872923Z level=info msg="Executing migration" id="add index builtin_role.name" grafana | logger=migrator t=2024-01-30T23:12:41.716872188Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=999.175µs grafana | logger=migrator t=2024-01-30T23:12:41.719979678Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" grafana | logger=migrator t=2024-01-30T23:12:41.727461531Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=7.481543ms grafana | logger=migrator t=2024-01-30T23:12:41.730477928Z level=info msg="Executing migration" id="add index builtin_role.org_id" grafana | logger=migrator t=2024-01-30T23:12:41.731207296Z level=info 
msg="Migration successfully executed" id="add index builtin_role.org_id" duration=728.848µs grafana | logger=migrator t=2024-01-30T23:12:41.734680866Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" grafana | logger=migrator t=2024-01-30T23:12:41.735425115Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=743.579µs grafana | logger=migrator t=2024-01-30T23:12:41.73833062Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" grafana | logger=migrator t=2024-01-30T23:12:41.739301564Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=968.194µs grafana | logger=migrator t=2024-01-30T23:12:41.742132717Z level=info msg="Executing migration" id="add unique index role.uid" grafana | logger=migrator t=2024-01-30T23:12:41.743121403Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=988.516µs grafana | logger=migrator t=2024-01-30T23:12:41.747172446Z level=info msg="Executing migration" id="create seed assignment table" grafana | logger=migrator t=2024-01-30T23:12:41.747856824Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=682.458µs grafana | logger=migrator t=2024-01-30T23:12:41.750783219Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" grafana | logger=migrator t=2024-01-30T23:12:41.751800256Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.016607ms grafana | logger=migrator t=2024-01-30T23:12:41.75509997Z level=info msg="Executing migration" id="add column hidden to role table" grafana | logger=migrator t=2024-01-30T23:12:41.763403503Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.302813ms grafana | logger=migrator t=2024-01-30T23:12:41.76759884Z level=info msg="Executing 
migration" id="permission kind migration" grafana | logger=migrator t=2024-01-30T23:12:41.77340935Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.80969ms grafana | logger=migrator t=2024-01-30T23:12:41.777592597Z level=info msg="Executing migration" id="permission attribute migration" grafana | logger=migrator t=2024-01-30T23:12:41.789351239Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=11.759802ms grafana | logger=migrator t=2024-01-30T23:12:41.793686291Z level=info msg="Executing migration" id="permission identifier migration" grafana | logger=migrator t=2024-01-30T23:12:41.802758183Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=9.072443ms grafana | logger=migrator t=2024-01-30T23:12:41.80886368Z level=info msg="Executing migration" id="add permission identifier index" grafana | logger=migrator t=2024-01-30T23:12:41.809622679Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=759.759µs grafana | logger=migrator t=2024-01-30T23:12:41.812495853Z level=info msg="Executing migration" id="create query_history table v1" grafana | logger=migrator t=2024-01-30T23:12:41.81393361Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.437367ms grafana | logger=migrator t=2024-01-30T23:12:41.819133824Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" grafana | logger=migrator t=2024-01-30T23:12:41.820918419Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.783415ms grafana | logger=migrator t=2024-01-30T23:12:41.823655509Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" grafana | logger=migrator t=2024-01-30T23:12:41.823718621Z level=info msg="Migration successfully 
executed" id="alter table query_history alter column created_by type to bigint" duration=64.172µs grafana | logger=migrator t=2024-01-30T23:12:41.826036011Z level=info msg="Executing migration" id="rbac disabled migrator" grafana | logger=migrator t=2024-01-30T23:12:41.826068821Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=32.811µs grafana | logger=migrator t=2024-01-30T23:12:41.828953225Z level=info msg="Executing migration" id="teams permissions migration" grafana | logger=migrator t=2024-01-30T23:12:41.829638033Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=685.497µs grafana | logger=migrator t=2024-01-30T23:12:41.834647201Z level=info msg="Executing migration" id="dashboard permissions" grafana | logger=migrator t=2024-01-30T23:12:41.835513084Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=866.843µs grafana | logger=migrator t=2024-01-30T23:12:41.8384704Z level=info msg="Executing migration" id="dashboard permissions uid scopes" grafana | logger=migrator t=2024-01-30T23:12:41.839088146Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=615.576µs grafana | logger=migrator t=2024-01-30T23:12:41.841632451Z level=info msg="Executing migration" id="drop managed folder create actions" grafana | logger=migrator t=2024-01-30T23:12:41.841817216Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=185.125µs grafana | logger=migrator t=2024-01-30T23:12:41.844490934Z level=info msg="Executing migration" id="alerting notification permissions" grafana | logger=migrator t=2024-01-30T23:12:41.844780442Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=289.848µs grafana | logger=migrator t=2024-01-30T23:12:41.8497552Z level=info msg="Executing migration" id="create query_history_star table v1" grafana | 
logger=migrator t=2024-01-30T23:12:41.85054603Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=790.66µs grafana | logger=migrator t=2024-01-30T23:12:41.852979712Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" grafana | logger=migrator t=2024-01-30T23:12:41.854118701Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.138249ms grafana | logger=migrator t=2024-01-30T23:12:41.856902843Z level=info msg="Executing migration" id="add column org_id in query_history_star" grafana | logger=migrator t=2024-01-30T23:12:41.864258092Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=7.35527ms grafana | logger=migrator t=2024-01-30T23:12:41.867831983Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" grafana | logger=migrator t=2024-01-30T23:12:41.867876714Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=45.041µs grafana | logger=migrator t=2024-01-30T23:12:41.869846045Z level=info msg="Executing migration" id="create correlation table v1" grafana | logger=migrator t=2024-01-30T23:12:41.870499542Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=653.007µs grafana | logger=migrator t=2024-01-30T23:12:41.873081668Z level=info msg="Executing migration" id="add index correlations.uid" grafana | logger=migrator t=2024-01-30T23:12:41.874299579Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.217441ms grafana | logger=migrator t=2024-01-30T23:12:41.879644926Z level=info msg="Executing migration" id="add index correlations.source_uid" grafana | logger=migrator t=2024-01-30T23:12:41.880760805Z level=info msg="Migration successfully executed" id="add index 
correlations.source_uid" duration=1.115389ms grafana | logger=migrator t=2024-01-30T23:12:41.883606668Z level=info msg="Executing migration" id="add correlation config column" grafana | logger=migrator t=2024-01-30T23:12:41.892108187Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.500729ms grafana | logger=migrator t=2024-01-30T23:12:41.895148815Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" grafana | logger=migrator t=2024-01-30T23:12:41.896212371Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.065277ms grafana | logger=migrator t=2024-01-30T23:12:41.900945303Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" kafka | [2024-01-30 23:13:16,247] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,247] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,253] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,253] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,253] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,253] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,254] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,260] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,261] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,261] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,261] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,261] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,266] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) kafka | [2024-01-30 23:13:16,266] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) kafka | [2024-01-30 23:13:16,266] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,266] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) kafka | [2024-01-30 23:13:16,266] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(k7KpSrR8TmGhJQ-7sqVboQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) kafka | 
[2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker 
id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) grafana | logger=migrator t=2024-01-30T23:12:41.902016311Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.071007ms grafana | logger=migrator t=2024-01-30T23:12:41.905012687Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" grafana | logger=migrator t=2024-01-30T23:12:41.94054106Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=35.527923ms grafana | logger=migrator t=2024-01-30T23:12:41.9444807Z level=info msg="Executing migration" id="create correlation v2" grafana | logger=migrator t=2024-01-30T23:12:41.945117697Z level=info msg="Migration successfully executed" id="create correlation v2" duration=636.587µs grafana | logger=migrator t=2024-01-30T23:12:41.948174626Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" grafana | logger=migrator t=2024-01-30T23:12:41.949373636Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.19855ms kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) kafka | [2024-01-30 23:13:16,270] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 
from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the 
become-leader transition for partition __consumer_offsets-5 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-13 (state.change.logger) kafka | [2024-01-30 23:13:16,271] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) kafka | [2024-01-30 23:13:16,280] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:16,281] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-30 23:13:16,282] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:16,282] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-30 23:13:16,282] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) grafana | logger=migrator t=2024-01-30T23:12:41.952358563Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" grafana | logger=migrator t=2024-01-30T23:12:41.953556994Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.196091ms grafana | logger=migrator t=2024-01-30T23:12:41.95731471Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" grafana | logger=migrator t=2024-01-30T23:12:41.958415708Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.100548ms grafana | logger=migrator t=2024-01-30T23:12:41.961537379Z level=info msg="Executing migration" id="copy correlation v1 to v2" grafana | logger=migrator t=2024-01-30T23:12:41.961782705Z 
level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=245.216µs grafana | logger=migrator t=2024-01-30T23:12:41.964994408Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" grafana | logger=migrator t=2024-01-30T23:12:41.965782667Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=788.2µs grafana | logger=migrator t=2024-01-30T23:12:41.969697787Z level=info msg="Executing migration" id="add provisioning column" grafana | logger=migrator t=2024-01-30T23:12:41.978246377Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.54852ms grafana | logger=migrator t=2024-01-30T23:12:41.981058479Z level=info msg="Executing migration" id="create entity_events table" grafana | logger=migrator t=2024-01-30T23:12:41.98184885Z level=info msg="Migration successfully executed" id="create entity_events table" duration=790.161µs grafana | logger=migrator t=2024-01-30T23:12:41.984892548Z level=info msg="Executing migration" id="create dashboard public config v1" grafana | logger=migrator t=2024-01-30T23:12:41.985810031Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=917.193µs grafana | logger=migrator t=2024-01-30T23:12:41.990606775Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-01-30T23:12:41.991060896Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-01-30T23:12:41.994237607Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-01-30T23:12:41.994702229Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | 
logger=migrator t=2024-01-30T23:12:41.997617944Z level=info msg="Executing migration" id="Drop old dashboard public config table" grafana | logger=migrator t=2024-01-30T23:12:41.998404205Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=785.731µs grafana | logger=migrator t=2024-01-30T23:12:42.003153306Z level=info msg="Executing migration" id="recreate dashboard public config v1" grafana | logger=migrator t=2024-01-30T23:12:42.00405426Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=900.574µs grafana | logger=migrator t=2024-01-30T23:12:42.007016136Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" grafana | logger=migrator t=2024-01-30T23:12:42.008109244Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.092447ms grafana | logger=migrator t=2024-01-30T23:12:42.010976217Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" grafana | logger=migrator t=2024-01-30T23:12:42.012105337Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.13037ms grafana | logger=migrator t=2024-01-30T23:12:42.015832582Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2024-01-30T23:12:42.016882539Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.048077ms grafana | logger=migrator t=2024-01-30T23:12:42.019823894Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2024-01-30T23:12:42.020877782Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.053648ms 
grafana | logger=migrator t=2024-01-30T23:12:42.024830743Z level=info msg="Executing migration" id="Drop public config table" grafana | logger=migrator t=2024-01-30T23:12:42.025649733Z level=info msg="Migration successfully executed" id="Drop public config table" duration=816.3µs grafana | logger=migrator t=2024-01-30T23:12:42.028719013Z level=info msg="Executing migration" id="Recreate dashboard public config v2" grafana | logger=migrator t=2024-01-30T23:12:42.029744649Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.025326ms grafana | logger=migrator t=2024-01-30T23:12:42.032609903Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" grafana | logger=migrator t=2024-01-30T23:12:42.03368841Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.078077ms grafana | logger=migrator t=2024-01-30T23:12:42.037450757Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" grafana | logger=migrator t=2024-01-30T23:12:42.038567355Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.115578ms grafana | logger=migrator t=2024-01-30T23:12:42.04189017Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" grafana | logger=migrator t=2024-01-30T23:12:42.042937228Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.046838ms grafana | logger=migrator t=2024-01-30T23:12:42.046725765Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" grafana | logger=migrator t=2024-01-30T23:12:42.083423266Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=36.696491ms 
grafana | logger=migrator t=2024-01-30T23:12:42.086470874Z level=info msg="Executing migration" id="add annotations_enabled column" grafana | logger=migrator t=2024-01-30T23:12:42.094969052Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.497238ms grafana | logger=migrator t=2024-01-30T23:12:42.099375536Z level=info msg="Executing migration" id="add time_selection_enabled column" grafana | logger=migrator t=2024-01-30T23:12:42.1054098Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.032854ms grafana | logger=migrator t=2024-01-30T23:12:42.108372866Z level=info msg="Executing migration" id="delete orphaned public dashboards" grafana | logger=migrator t=2024-01-30T23:12:42.108618212Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=242.846µs grafana | logger=migrator t=2024-01-30T23:12:42.112521502Z level=info msg="Executing migration" id="add share column" grafana | logger=migrator t=2024-01-30T23:12:42.121446302Z level=info msg="Migration successfully executed" id="add share column" duration=8.92407ms kafka | [2024-01-30 23:13:16,282] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-30 23:13:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-30 23:13:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling 
loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,283] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,283] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,284] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,284] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,285] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,285] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,286] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,286] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,286] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,286] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-30T23:12:42.124368377Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
grafana | logger=migrator t=2024-01-30T23:12:42.124530221Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=161.934µs
grafana | logger=migrator t=2024-01-30T23:12:42.127420655Z level=info msg="Executing migration" id="create file table"
grafana | logger=migrator t=2024-01-30T23:12:42.12800909Z level=info msg="Migration successfully executed" id="create file table" duration=590.205µs
grafana | logger=migrator t=2024-01-30T23:12:42.131843138Z level=info msg="Executing migration" id="file table idx: path natural pk"
grafana | logger=migrator t=2024-01-30T23:12:42.133537642Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.693484ms
grafana | logger=migrator t=2024-01-30T23:12:42.136833916Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
grafana | logger=migrator t=2024-01-30T23:12:42.138541001Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.706694ms
grafana | logger=migrator t=2024-01-30T23:12:42.141703561Z level=info msg="Executing migration" id="create file_meta table"
grafana | logger=migrator t=2024-01-30T23:12:42.142466841Z level=info msg="Migration successfully executed" id="create file_meta table" duration=762.83µs
grafana | logger=migrator t=2024-01-30T23:12:42.146196687Z level=info msg="Executing migration" id="file table idx: path key"
grafana | logger=migrator t=2024-01-30T23:12:42.147399318Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.199342ms
grafana | logger=migrator t=2024-01-30T23:12:42.151692437Z level=info msg="Executing migration" id="set path collation in file table"
grafana | logger=migrator t=2024-01-30T23:12:42.151753759Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=61.962µs
grafana | logger=migrator t=2024-01-30T23:12:42.154938221Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
grafana | logger=migrator t=2024-01-30T23:12:42.155003663Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=66.082µs
grafana | logger=migrator t=2024-01-30T23:12:42.157769674Z level=info msg="Executing migration" id="managed permissions migration"
grafana | logger=migrator t=2024-01-30T23:12:42.158653606Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=883.802µs
grafana | logger=migrator t=2024-01-30T23:12:42.162664789Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
grafana | logger=migrator t=2024-01-30T23:12:42.162973 level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=308.828µs
grafana | logger=migrator t=2024-01-30T23:12:42.165983105Z level=info msg="Executing migration" id="RBAC action name migrator"
grafana | logger=migrator t=2024-01-30T23:12:42.166781935Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=798.821µs
grafana | logger=migrator t=2024-01-30T23:12:42.169475524Z level=info msg="Executing migration" id="Add UID column to playlist"
grafana | logger=migrator t=2024-01-30T23:12:42.178543677Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.068443ms
grafana | logger=migrator t=2024-01-30T23:12:42.182772605Z level=info msg="Executing migration" id="Update uid column values in playlist"
grafana | logger=migrator t=2024-01-30T23:12:42.18292957Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=157.205µs
grafana | logger=migrator t=2024-01-30T23:12:42.185399942Z level=info msg="Executing migration" id="Add index for uid in playlist"
grafana | logger=migrator t=2024-01-30T23:12:42.18647655Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.075988ms
grafana | logger=migrator t=2024-01-30T23:12:42.189302723Z level=info msg="Executing migration" id="update group index for alert rules"
grafana | logger=migrator t=2024-01-30T23:12:42.189672502Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=370.269µs
grafana | logger=migrator t=2024-01-30T23:12:42.192399882Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
grafana | logger=migrator t=2024-01-30T23:12:42.192608388Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=208.346µs
grafana | logger=migrator t=2024-01-30T23:12:42.196304192Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
grafana | logger=migrator t=2024-01-30T23:12:42.196774694Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=466.402µs
grafana | logger=migrator t=2024-01-30T23:12:42.199844494Z level=info msg="Executing migration" id="add action column to seed_assignment"
grafana | logger=migrator t=2024-01-30T23:12:42.208713511Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.868597ms
grafana | logger=migrator t=2024-01-30T23:12:42.21178079Z level=info msg="Executing migration" id="add scope column to seed_assignment"
grafana | logger=migrator t=2024-01-30T23:12:42.217872146Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=6.090376ms
grafana | logger=migrator t=2024-01-30T23:12:42.220919054Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
grafana | logger=migrator t=2024-01-30T23:12:42.222041783Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.122669ms
grafana | logger=migrator t=2024-01-30T23:12:42.225979744Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
grafana | logger=migrator t=2024-01-30T23:12:42.331618295Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=105.633341ms
grafana | logger=migrator t=2024-01-30T23:12:42.334998611Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
grafana | logger=migrator t=2024-01-30T23:12:42.335857643Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=857.072µs
grafana | logger=migrator t=2024-01-30T23:12:42.339954769Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
grafana | logger=migrator t=2024-01-30T23:12:42.341134499Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.179129ms
grafana | logger=migrator t=2024-01-30T23:12:42.344014842Z level=info msg="Executing migration" id="add primary key to seed_assigment"
grafana | logger=migrator t=2024-01-30T23:12:42.37861761Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=34.601088ms
grafana | logger=migrator t=2024-01-30T23:12:42.382052099Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
grafana | logger=migrator t=2024-01-30T23:12:42.382235483Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=183.324µs
grafana | logger=migrator t=2024-01-30T23:12:42.385865766Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
kafka | [2024-01-30 23:13:16,286] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,286] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,286] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,286] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,286] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,286] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,286] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,286] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,286] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,286] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,286] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,286] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,286] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,286] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,286] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,287] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,287] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,287] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,287] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,287] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,287] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,289] INFO [Broker id=1] Finished LeaderAndIsr request in 521ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2024-01-30 23:13:16,291] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 9 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,292] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,292] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,292] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,293] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,293] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,293] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,294] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,294] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,295] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=k7KpSrR8TmGhJQ-7sqVboQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=B6KsyJDSTOqeYl8_kE1bXQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-30 23:13:16,297] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 14 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,297] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,297] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,298] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,298] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,298] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,298] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,299] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,299] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,299] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,299] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,300] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,300] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,300] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,301] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,301] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,301] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch
1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,301] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) grafana | logger=migrator t=2024-01-30T23:12:42.38601389Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=148.334µs grafana | logger=migrator t=2024-01-30T23:12:42.388593746Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" grafana | logger=migrator t=2024-01-30T23:12:42.38874391Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=150.144µs grafana | logger=migrator t=2024-01-30T23:12:42.392011014Z level=info msg="Executing migration" id="create folder table" grafana | logger=migrator t=2024-01-30T23:12:42.393369309Z level=info msg="Migration successfully executed" id="create folder table" duration=1.357975ms grafana | logger=migrator t=2024-01-30T23:12:42.397695159Z level=info msg="Executing migration" id="Add index for parent_uid" grafana | logger=migrator t=2024-01-30T23:12:42.39888104Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.187421ms grafana | logger=migrator t=2024-01-30T23:12:42.401803945Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" grafana | logger=migrator t=2024-01-30T23:12:42.403002646Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.198281ms grafana | logger=migrator t=2024-01-30T23:12:42.406168827Z level=info msg="Executing migration" id="Update folder title length" grafana | 
logger=migrator t=2024-01-30T23:12:42.406193017Z level=info msg="Migration successfully executed" id="Update folder title length" duration=25.05µs grafana | logger=migrator t=2024-01-30T23:12:42.409989706Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2024-01-30T23:12:42.411153145Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.163089ms grafana | logger=migrator t=2024-01-30T23:12:42.414312456Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" grafana | logger=migrator t=2024-01-30T23:12:42.415386514Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.073698ms grafana | logger=migrator t=2024-01-30T23:12:42.418591656Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" grafana | logger=migrator t=2024-01-30T23:12:42.419729365Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.137139ms grafana | logger=migrator t=2024-01-30T23:12:42.423669606Z level=info msg="Executing migration" id="Sync dashboard and folder table" grafana | logger=migrator t=2024-01-30T23:12:42.424116858Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=445.062µs grafana | logger=migrator t=2024-01-30T23:12:42.427352301Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" grafana | logger=migrator t=2024-01-30T23:12:42.427626548Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=272.437µs grafana | logger=migrator t=2024-01-30T23:12:42.430980334Z level=info msg="Executing migration" id="create anon_device table" grafana | logger=migrator t=2024-01-30T23:12:42.432280907Z level=info 
msg="Migration successfully executed" id="create anon_device table" duration=1.296263ms grafana | logger=migrator t=2024-01-30T23:12:42.436722581Z level=info msg="Executing migration" id="add unique index anon_device.device_id" grafana | logger=migrator t=2024-01-30T23:12:42.438583259Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.860198ms grafana | logger=migrator t=2024-01-30T23:12:42.441982036Z level=info msg="Executing migration" id="add index anon_device.updated_at" grafana | logger=migrator t=2024-01-30T23:12:42.443130995Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.151159ms grafana | logger=migrator t=2024-01-30T23:12:42.446180114Z level=info msg="Executing migration" id="create signing_key table" grafana | logger=migrator t=2024-01-30T23:12:42.446984225Z level=info msg="Migration successfully executed" id="create signing_key table" duration=804.29µs grafana | logger=migrator t=2024-01-30T23:12:42.450763881Z level=info msg="Executing migration" id="add unique index signing_key.key_id" grafana | logger=migrator t=2024-01-30T23:12:42.451942782Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.178471ms grafana | logger=migrator t=2024-01-30T23:12:42.455043121Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" grafana | logger=migrator t=2024-01-30T23:12:42.45616176Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.118839ms grafana | logger=migrator t=2024-01-30T23:12:42.459181167Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" grafana | logger=migrator t=2024-01-30T23:12:42.459465984Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=285.697µs grafana | 
logger=migrator t=2024-01-30T23:12:42.463727664Z level=info msg="Executing migration" id="Add folder_uid for dashboard" grafana | logger=migrator t=2024-01-30T23:12:42.473173656Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.442852ms grafana | logger=migrator t=2024-01-30T23:12:42.476082491Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" grafana | logger=migrator t=2024-01-30T23:12:42.476580253Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=496.722µs grafana | logger=migrator t=2024-01-30T23:12:42.479535579Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" grafana | logger=migrator t=2024-01-30T23:12:42.480349401Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=811.492µs grafana | logger=migrator t=2024-01-30T23:12:42.483521092Z level=info msg="Executing migration" id="create sso_setting table" grafana | logger=migrator t=2024-01-30T23:12:42.484471836Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=950.524µs grafana | logger=migrator t=2024-01-30T23:12:42.491276591Z level=info msg="Executing migration" id="copy kvstore migration status to each org" grafana | logger=migrator t=2024-01-30T23:12:42.492163564Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=887.213µs grafana | logger=migrator t=2024-01-30T23:12:42.495456048Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" grafana | logger=migrator t=2024-01-30T23:12:42.495861628Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=409.27µs grafana | logger=migrator t=2024-01-30T23:12:42.498601358Z level=info msg="migrations completed" performed=526 skipped=0 duration=3.618530964s 
grafana | logger=sqlstore t=2024-01-30T23:12:42.509129509Z level=info msg="Created default admin" user=admin
grafana | logger=sqlstore t=2024-01-30T23:12:42.509410356Z level=info msg="Created default organization"
grafana | logger=secrets t=2024-01-30T23:12:42.514059526Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
grafana | logger=plugin.store t=2024-01-30T23:12:42.528731292Z level=info msg="Loading plugins..."
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,302] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,303] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,303] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
kafka | [2024-01-30 23:13:16,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 19 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,304] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-30 23:13:16,304] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,305] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 20 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,306] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 19 milliseconds for epoch 0, of which 19 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-30 23:13:16,350] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-e1d7455a-b29e-4689-b5f7-83ff5acf22d7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-30 23:13:16,364] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group af90a869-32d4-41c0-900c-5574709c07e7 in Empty state. Created a new member id consumer-af90a869-32d4-41c0-900c-5574709c07e7-3-efedbc9f-87e0-46d1-9edc-7b7ed80c5c82 and request the member to rejoin with this id.
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:16,368] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-e1d7455a-b29e-4689-b5f7-83ff5acf22d7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:16,368] INFO [GroupCoordinator 1]: Preparing to rebalance group af90a869-32d4-41c0-900c-5574709c07e7 in state PreparingRebalance with old generation 0 (__consumer_offsets-6) (reason: Adding new member consumer-af90a869-32d4-41c0-900c-5574709c07e7-3-efedbc9f-87e0-46d1-9edc-7b7ed80c5c82 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:16,804] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 9ff8f2a4-20e4-47ce-9646-2a802e941f7c in Empty state. Created a new member id consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2-09de679e-d476-424e-b6d1-a15a0d620de2 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:16,818] INFO [GroupCoordinator 1]: Preparing to rebalance group 9ff8f2a4-20e4-47ce-9646-2a802e941f7c in state PreparingRebalance with old generation 0 (__consumer_offsets-23) (reason: Adding new member consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2-09de679e-d476-424e-b6d1-a15a0d620de2 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:19,382] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:19,390] INFO [GroupCoordinator 1]: Stabilized group af90a869-32d4-41c0-900c-5574709c07e7 generation 1 (__consumer_offsets-6) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:19,413] INFO [GroupCoordinator 1]: Assignment received from leader consumer-af90a869-32d4-41c0-900c-5574709c07e7-3-efedbc9f-87e0-46d1-9edc-7b7ed80c5c82 for group af90a869-32d4-41c0-900c-5574709c07e7 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:19,413] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-e1d7455a-b29e-4689-b5f7-83ff5acf22d7 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:19,819] INFO [GroupCoordinator 1]: Stabilized group 9ff8f2a4-20e4-47ce-9646-2a802e941f7c generation 1 (__consumer_offsets-23) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-30 23:13:19,835] INFO [GroupCoordinator 1]: Assignment received from leader consumer-9ff8f2a4-20e4-47ce-9646-2a802e941f7c-2-09de679e-d476-424e-b6d1-a15a0d620de2 for group 9ff8f2a4-20e4-47ce-9646-2a802e941f7c for generation 1. 
The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) grafana | logger=local.finder t=2024-01-30T23:12:42.566218843Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled grafana | logger=plugin.store t=2024-01-30T23:12:42.566274265Z level=info msg="Plugins loaded" count=55 duration=37.544543ms grafana | logger=query_data t=2024-01-30T23:12:42.568652496Z level=info msg="Query Service initialization" grafana | logger=live.push_http t=2024-01-30T23:12:42.572379292Z level=info msg="Live Push Gateway initialization" grafana | logger=ngalert.migration t=2024-01-30T23:12:42.579514035Z level=info msg=Starting grafana | logger=ngalert.migration orgID=1 t=2024-01-30T23:12:42.580225203Z level=info msg="Migrating alerts for organisation" grafana | logger=ngalert.migration orgID=1 t=2024-01-30T23:12:42.580837989Z level=info msg="Alerts found to migrate" alerts=0 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-01-30T23:12:42.582335377Z level=info msg="Completed legacy migration" grafana | logger=infra.usagestats.collector t=2024-01-30T23:12:42.610236213Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 grafana | logger=provisioning.datasources t=2024-01-30T23:12:42.612536022Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz grafana | logger=provisioning.alerting t=2024-01-30T23:12:42.623170635Z level=info msg="starting to provision alerting" grafana | logger=provisioning.alerting t=2024-01-30T23:12:42.623189166Z level=info msg="finished to provision alerting" grafana | logger=grafanaStorageLogger t=2024-01-30T23:12:42.623596366Z level=info msg="Storage starting" grafana | logger=ngalert.state.manager t=2024-01-30T23:12:42.624785686Z level=info msg="Warming state cache for startup" grafana | logger=ngalert.multiorg.alertmanager 
t=2024-01-30T23:12:42.633828549Z level=info msg="Starting MultiOrg Alertmanager" grafana | logger=http.server t=2024-01-30T23:12:42.637116023Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= grafana | logger=grafana-apiserver t=2024-01-30T23:12:42.637199825Z level=info msg="Authentication is disabled" grafana | logger=grafana-apiserver t=2024-01-30T23:12:42.642634885Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" grafana | logger=sqlstore.transactions t=2024-01-30T23:12:42.682742103Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=ngalert.state.manager t=2024-01-30T23:12:42.72550201Z level=info msg="State cache has been initialized" states=0 duration=100.714614ms grafana | logger=ngalert.scheduler t=2024-01-30T23:12:42.725530291Z level=info msg="Starting scheduler" tickInterval=10s grafana | logger=ticker t=2024-01-30T23:12:42.725571762Z level=info msg=starting first_tick=2024-01-30T23:12:50Z grafana | logger=plugins.update.checker t=2024-01-30T23:12:42.741377888Z level=info msg="Update check succeeded" duration=118.047609ms grafana | logger=sqlstore.transactions t=2024-01-30T23:12:42.762924361Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" grafana | logger=grafana.update.checker t=2024-01-30T23:12:52.643838127Z level=error msg="Update check failed" error="failed to get latest.json repo from github.com: Get \"https://raw.githubusercontent.com/grafana/grafana/main/latest.json\": net/http: TLS handshake timeout" duration=10.020515148s grafana | logger=infra.usagestats t=2024-01-30T23:14:08.632326909Z level=info msg="Usage stats are ready to report" ++ echo 'Tearing down containers...' Tearing down containers... ++ docker-compose down -v --remove-orphans Stopping policy-apex-pdp ... Stopping policy-pap ... Stopping kafka ... 
Stopping policy-api ...
Stopping grafana ...
Stopping simulator ...
Stopping compose_zookeeper_1 ...
Stopping prometheus ...
Stopping mariadb ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing kafka ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing grafana ...
Removing simulator ...
Removing compose_zookeeper_1 ...
Removing prometheus ...
Removing mariadb ...
Removing policy-db-migrator ... done
Removing policy-pap ... done
Removing policy-apex-pdp ... done
Removing compose_zookeeper_1 ... done
Removing prometheus ... done
Removing policy-api ... done
Removing grafana ... done
Removing simulator ... done
Removing kafka ... done
Removing mariadb ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.Kzo1TbVgxn ]]
+ rsync -av /tmp/tmp.Kzo1TbVgxn/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 910,614 bytes  received 95 bytes  1,821,418.00 bytes/sec
total size is 910,067  speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 0
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo
Agent pid 2827 killed;
[ssh-agent] Stopped.
Robot results publisher started...
INFO: Checking test criticality is deprecated and will be dropped in a future release!
-Parsing output xml: Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7692496358369720906.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4460463068272406205.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15101098111444286519.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-tEtV from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-tEtV/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $
/bin/bash /tmp/jenkins4933500933505739886.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config6333962785079571066tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12305967316623159296.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3353158878883339209.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-tEtV from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-tEtV/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4730790676457322071.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17504331505297046326.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-tEtV from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-tEtV/bin to PATH
INFO: No Stack...
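(Editor's note, not part of the job output.) The pip ERROR above is a post-install consistency check: lftools 0.37.8 declares `openstacksdk<1.5.0`, but version 2.1.0 ended up in the venv. A minimal sketch of that specifier check, using the versions from the log; the helper names are illustrative and the parsing is simplified to dotted numeric versions with only the `<` and `>=` operators seen here:

```python
# Sketch of the version-vs-specifier comparison behind pip's conflict warning.
# Simplified: numeric dotted versions only, "<x.y.z" and ">=x.y.z" operators only.

def parse(version: str) -> tuple:
    """Turn '2.1.0' into (2, 1, 0) so tuples compare numerically."""
    return tuple(int(part) for part in version.split("."))

def satisfies(installed: str, specifier: str) -> bool:
    """Return True if the installed version meets the declared requirement."""
    if specifier.startswith(">="):
        return parse(installed) >= parse(specifier[2:])
    if specifier.startswith("<"):
        return parse(installed) < parse(specifier[1:])
    raise ValueError(f"unsupported specifier: {specifier}")

# lftools 0.37.8 requires openstacksdk<1.5.0, but 2.1.0 is installed:
print(satisfies("2.1.0", "<1.5.0"))   # False -> pip reports a conflict
# The later logs-deploy step hits the mirror image of the same problem:
print(satisfies("1.4.0", ">=2.0.0"))  # False -> second conflict in the log
```

Because the two tools in the shared venv pin non-overlapping openstacksdk ranges, whichever was installed last "wins" and the other reports incompatibility; the warning is non-fatal here, which is why the job continues.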
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins574406257036644255.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-tEtV from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-tEtV/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1556
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus

---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-997 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm
cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 15G 141G 10% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 903 24561 0 6701 30807 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:c1:27:0e brd ff:ff:ff:ff:ff:ff inet 10.30.107.9/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 76674sec preferred_lft 76674sec inet6 fe80::f816:3eff:fec1:270e/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:7b:a0:e7:93 brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-997) 01/30/24 _x86_64_ (8 CPU) 20:34:24 LINUX RESTART (8 CPU) 20:35:02 tps rtps wtps bread/s bwrtn/s 20:36:01 21.96 2.95 19.01 83.24 18490.02 20:37:01 12.45 0.00 12.45 0.00 18023.80 20:38:01 12.16 0.00 12.16 0.00 18017.33 20:39:01 12.56 0.00 12.56 0.00 18158.57 20:40:01 12.48 0.00 12.48 0.00 18023.80 20:41:01 12.35 0.00 12.35 0.00 18022.86 20:42:01 12.40 0.00 12.40 0.00 18155.24 20:43:01 10.30 0.00 10.30 0.00 
14529.58 20:44:01 2.67 0.97 1.70 20.66 26.00 20:45:01 1.08 0.00 1.08 0.00 14.93 20:46:01 1.05 0.00 1.05 0.00 12.66 20:47:01 1.12 0.00 1.12 0.00 14.26 20:48:01 0.77 0.00 0.77 0.00 9.73 20:49:01 1.00 0.00 1.00 0.00 12.66 20:50:01 5.93 4.07 1.87 32.53 28.66 20:51:01 0.90 0.00 0.90 0.00 12.40 20:52:01 0.97 0.00 0.97 0.00 11.20 20:53:01 0.85 0.00 0.85 0.00 12.13 20:54:01 0.97 0.00 0.97 0.00 11.46 20:55:01 1.03 0.00 1.03 0.00 14.40 20:56:01 1.03 0.02 1.02 0.13 11.60 20:57:01 1.22 0.00 1.22 0.00 15.73 20:58:01 0.88 0.00 0.88 0.00 10.93 20:59:01 1.10 0.00 1.10 0.00 14.13 21:00:01 0.93 0.00 0.93 0.00 11.46 21:01:01 1.17 0.00 1.17 0.00 15.20 21:02:01 0.83 0.00 0.83 0.00 10.00 21:03:01 1.07 0.00 1.07 0.00 14.53 21:04:01 1.03 0.00 1.03 0.00 12.66 21:05:01 1.00 0.00 1.00 0.00 13.20 21:06:01 0.97 0.00 0.97 0.00 11.46 21:07:01 1.02 0.00 1.02 0.00 13.60 21:08:01 0.87 0.00 0.87 0.00 10.40 21:09:01 1.20 0.00 1.20 0.00 15.33 21:10:01 1.02 0.00 1.02 0.00 12.66 21:11:01 1.03 0.00 1.03 0.00 14.00 21:12:01 0.95 0.00 0.95 0.00 11.46 21:13:01 3.43 2.12 1.32 57.59 17.86 21:14:01 0.88 0.00 0.88 0.00 10.93 21:15:01 1.33 0.00 1.33 0.00 16.53 21:16:01 0.97 0.00 0.97 0.00 12.13 21:17:01 1.32 0.02 1.30 0.13 16.40 21:18:01 1.03 0.00 1.03 0.00 12.66 21:19:01 1.13 0.00 1.13 0.00 13.86 21:20:01 0.93 0.00 0.93 0.00 11.20 21:21:01 1.38 0.00 1.38 0.00 17.46 21:22:01 0.97 0.00 0.97 0.00 11.46 21:23:01 1.85 0.00 1.85 0.00 21.60 21:24:01 1.08 0.00 1.08 0.00 12.93 21:25:01 1.48 0.00 1.48 0.00 18.53 21:26:01 1.43 0.00 1.43 0.00 16.26 21:27:01 1.90 0.00 1.90 0.00 22.26 21:28:01 1.37 0.00 1.37 0.00 15.60 21:29:01 1.55 0.00 1.55 0.00 19.33 21:30:01 1.53 0.00 1.53 0.00 18.40 21:31:01 1.18 0.00 1.18 0.00 14.53 21:32:01 1.49 0.00 1.49 0.00 17.22 21:33:01 1.65 0.00 1.65 0.00 20.13 21:34:01 1.33 0.00 1.33 0.00 15.33 21:35:01 1.92 0.00 1.92 0.00 23.06 21:36:01 1.43 0.00 1.43 0.00 17.73 21:37:01 1.87 0.00 1.87 0.00 21.86 21:38:01 1.33 0.00 1.33 0.00 15.73 21:39:01 1.92 0.00 1.92 0.00 22.93 21:40:01 1.38 0.00 1.38 0.00 
15.73 21:41:01 1.55 0.00 1.55 0.00 19.06 21:42:01 1.40 0.00 1.40 0.00 15.86 21:43:01 1.33 0.00 1.33 0.00 16.80 21:44:01 1.28 0.00 1.28 0.00 14.53 21:45:01 1.83 0.00 1.83 0.00 21.86 21:46:01 1.43 0.00 1.43 0.00 16.66 21:47:01 1.93 0.00 1.93 0.00 22.26 21:48:01 1.30 0.00 1.30 0.00 15.33 21:49:01 1.75 0.00 1.75 0.00 21.06 21:50:01 1.33 0.00 1.33 0.00 15.60 21:51:01 1.25 0.00 1.25 0.00 16.13 21:52:01 1.10 0.00 1.10 0.00 12.93 21:53:01 1.22 0.00 1.22 0.00 15.60 21:54:01 1.37 0.00 1.37 0.00 16.00 21:55:01 0.85 0.00 0.85 0.00 11.46 21:56:01 1.55 0.00 1.55 0.00 19.33 21:57:01 1.83 0.00 1.83 0.00 22.00 21:58:01 1.27 0.00 1.27 0.00 16.00 21:59:01 1.45 0.00 1.45 0.00 18.40 22:00:01 0.92 0.00 0.92 0.00 10.80 22:01:01 1.33 0.00 1.33 0.00 17.46 22:02:01 0.93 0.00 0.93 0.00 11.20 22:03:01 1.75 0.00 1.75 0.00 21.33 22:04:01 1.35 0.00 1.35 0.00 16.13 22:05:01 1.63 0.00 1.63 0.00 20.26 22:06:01 1.42 0.00 1.42 0.00 17.46 22:07:01 1.95 0.00 1.95 0.00 23.46 22:08:01 1.43 0.00 1.43 0.00 17.20 22:09:01 1.73 0.00 1.73 0.00 21.73 22:10:01 1.40 0.00 1.40 0.00 16.27 22:11:01 1.70 0.00 1.70 0.00 20.40 22:12:01 1.42 0.00 1.42 0.00 16.26 22:13:01 1.32 0.00 1.32 0.00 17.20 22:14:01 1.02 0.00 1.02 0.00 11.20 22:15:01 0.88 0.00 0.88 0.00 11.86 22:16:01 1.17 0.00 1.17 0.00 15.06 22:17:01 1.63 0.00 1.63 0.00 20.13 22:18:01 1.00 0.00 1.00 0.00 12.00 22:19:01 1.23 0.00 1.23 0.00 16.26 22:20:01 0.95 0.00 0.95 0.00 10.93 22:21:01 1.23 0.00 1.23 0.00 15.60 22:22:01 1.07 0.00 1.07 0.00 12.00 22:23:01 1.22 0.00 1.22 0.00 15.59 22:24:01 1.05 0.00 1.05 0.00 12.80 22:25:01 1.43 0.00 1.43 0.00 17.86 22:26:01 1.12 0.00 1.12 0.00 13.60 22:27:01 1.25 0.00 1.25 0.00 15.33 22:28:01 0.85 0.00 0.85 0.00 11.60 22:29:01 1.32 0.00 1.32 0.00 16.40 22:30:01 0.85 0.00 0.85 0.00 10.26 22:31:01 1.57 0.00 1.57 0.00 19.59 22:32:01 0.92 0.00 0.92 0.00 11.33 22:33:01 1.30 0.00 1.30 0.00 16.53 22:34:01 0.90 0.00 0.90 0.00 11.06 22:35:01 1.12 0.00 1.12 0.00 14.00 22:36:01 1.20 0.00 1.20 0.00 16.40 22:37:01 1.07 0.00 1.07 0.00 
13.60 22:38:01 1.05 0.00 1.05 0.00 13.46 22:39:01 1.07 0.00 1.07 0.00 13.06 22:40:01 1.13 0.00 1.13 0.00 14.00 22:41:01 1.10 0.00 1.10 0.00 14.13 22:42:01 1.05 0.00 1.05 0.00 12.53 22:43:01 0.93 0.00 0.93 0.00 11.73 22:44:01 1.17 0.00 1.17 0.00 14.26 22:45:01 1.27 0.00 1.27 0.00 16.66 22:46:01 1.08 0.00 1.08 0.00 13.20 22:47:01 0.97 0.00 0.97 0.00 12.40 22:48:01 1.17 0.00 1.17 0.00 14.40 22:49:01 0.88 0.00 0.88 0.00 12.26 22:50:01 1.20 0.00 1.20 0.00 14.26 22:51:01 1.55 0.00 1.55 0.00 18.80 22:52:01 0.95 0.00 0.95 0.00 11.86 22:53:01 1.17 0.00 1.17 0.00 14.00 22:54:01 1.08 0.00 1.08 0.00 14.13 22:55:01 1.07 0.00 1.07 0.00 12.80 22:56:01 1.20 0.00 1.20 0.00 14.80 22:57:01 0.87 0.00 0.87 0.00 11.33 22:58:01 1.17 0.00 1.17 0.00 14.26 22:59:01 1.22 0.00 1.22 0.00 15.60 23:00:01 0.98 0.00 0.98 0.00 11.46 23:01:01 1.63 0.00 1.63 0.00 19.73 23:02:01 1.13 0.00 1.13 0.00 13.86 23:03:01 0.93 0.00 0.93 0.00 12.00 23:04:01 1.18 0.00 1.18 0.00 14.80 23:05:01 1.05 0.00 1.05 0.00 13.20 23:06:02 1.22 0.00 1.22 0.00 15.86 23:07:01 1.05 0.00 1.05 0.00 13.96 23:08:01 1.17 0.00 1.17 0.00 13.60 23:09:01 1.12 0.00 1.12 0.00 13.60 23:10:01 1.50 0.27 1.23 25.60 172.24 23:11:01 101.48 37.56 63.92 1769.44 6808.20 23:12:01 136.67 23.03 113.65 2764.81 15571.61 23:13:01 520.30 11.68 508.62 766.47 150632.18 23:14:01 18.41 0.60 17.81 27.33 4686.47 23:15:01 3.88 0.00 3.88 0.00 79.70 23:16:01 56.66 1.20 55.46 104.78 2468.29 Average: 7.04 0.52 6.51 35.11 2011.58 20:35:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 20:36:01 30647756 31891284 2291464 6.96 38644 1532980 1294292 3.81 621236 1416152 28 20:37:01 30644280 31887956 2294940 6.97 38724 1532992 1294292 3.81 624616 1416104 12 20:38:01 30641196 31885008 2298024 6.98 38812 1532992 1294292 3.81 627628 1416112 32 20:39:01 30638164 31882108 2301056 6.99 38892 1532996 1294292 3.81 630456 1416108 8 20:40:01 30638464 31882524 2300756 6.98 39004 1532992 1294292 3.81 630656 1416120 12 20:41:01 
30636836 31880980 2302384 6.99 39092 1532996 1294292 3.81 632128 1416116 52 20:42:01 30635980 31880240 2303240 6.99 39172 1533008 1294292 3.81 633244 1416128 16 20:43:01 30633812 31878132 2305408 7.00 39252 1533012 1294292 3.81 634504 1416124 12 20:44:01 30630840 31876072 2308380 7.01 39344 1533632 1365456 4.02 635848 1416592 56 20:45:01 30629728 31874996 2309492 7.01 39408 1533620 1365456 4.02 637544 1416596 56 20:46:01 30617744 31863060 2321476 7.05 39456 1533644 1365456 4.02 650188 1416604 8 20:47:01 30616216 31861632 2323004 7.05 39496 1533648 1365456 4.02 651616 1416608 144 20:48:01 30614968 31860424 2324252 7.06 39528 1533652 1365456 4.02 652908 1416612 148 20:49:01 30613784 31859284 2325436 7.06 39560 1533656 1365456 4.02 653924 1416616 16 20:50:01 30609892 31856464 2329328 7.07 40596 1533644 1361852 4.01 656336 1416620 136 20:51:01 30609028 31855692 2330192 7.07 40636 1533664 1361852 4.01 657336 1416624 220 20:52:01 30607988 31854684 2331232 7.08 40660 1533664 1361852 4.01 658336 1416624 4 20:53:01 30607116 31853932 2332104 7.08 40684 1533800 1361852 4.01 659520 1416728 200 20:54:01 30606456 31853328 2332764 7.08 40716 1533804 1361852 4.01 660528 1416732 8 20:55:01 30605296 31852212 2333924 7.09 40772 1533800 1361852 4.01 661764 1416736 52 20:56:01 30599672 31846684 2339548 7.10 40804 1533820 1361852 4.01 666960 1416772 16 20:57:01 30598640 31845696 2340580 7.11 40836 1533824 1361852 4.01 668092 1416776 184 20:58:01 30598048 31845132 2341172 7.11 40860 1533828 1361852 4.01 668932 1416780 8 20:59:01 30596984 31844112 2342236 7.11 40892 1533832 1361852 4.01 670316 1416784 160 21:00:01 30595460 31842628 2343760 7.12 40924 1533828 1361852 4.01 671424 1416788 12 21:01:01 30594460 31841680 2344760 7.12 40964 1533840 1361852 4.01 672656 1416792 156 21:02:01 30593252 31840504 2345968 7.12 40988 1533848 1361852 4.01 673644 1416796 164 21:03:01 30592352 31839656 2346868 7.12 41028 1533856 1361852 4.01 674676 1416800 208 21:04:01 30591268 31838588 2347952 7.13 41068 
1533860 1361852 4.01 675728 1416804 152 21:05:01 30590384 31837784 2348836 7.13 41108 1533852 1361852 4.01 676960 1416804 36 21:06:01 30588588 31836036 2350632 7.14 41140 1533864 1361852 4.01 678920 1416808 12 21:07:01 30587032 31834532 2352188 7.14 41172 1533868 1361852 4.01 680028 1416820 8 21:08:01 30585668 31833208 2353552 7.15 41204 1533872 1361852 4.01 681324 1416824 12 21:09:01 30584584 31832152 2354636 7.15 41228 1533876 1361852 4.01 682432 1416828 168 21:10:01 30583640 31831240 2355580 7.15 41268 1533864 1361852 4.01 683376 1416832 16 21:11:01 30578292 31825940 2360928 7.17 41300 1533888 1361852 4.01 689472 1416836 52 21:12:01 30556244 31803932 2382976 7.23 41324 1533892 1361852 4.01 710952 1416844 164 21:13:01 30540928 31790932 2398292 7.28 41372 1535620 1378532 4.06 724372 1417496 4 21:14:01 30540804 31790852 2398416 7.28 41412 1535624 1378532 4.06 724136 1417500 200 21:15:01 30540900 31790988 2398320 7.28 41464 1535616 1394620 4.10 723972 1417504 12 21:16:01 30540328 31790476 2398892 7.28 41500 1535632 1394620 4.10 724212 1417508 180 21:17:01 30539908 31790092 2399312 7.28 41528 1535632 1394620 4.10 724496 1417512 184 21:18:01 30540104 31790344 2399116 7.28 41552 1535640 1394620 4.10 724244 1417516 8 21:19:01 30540292 31790564 2398928 7.28 41584 1535640 1394620 4.10 724320 1417516 4 21:20:01 30540176 31790484 2399044 7.28 41612 1535648 1394620 4.10 724332 1417520 12 21:21:01 30540100 31790448 2399120 7.28 41652 1535656 1394620 4.10 724396 1417528 40 21:22:01 30540068 31790452 2399152 7.28 41676 1535660 1394620 4.10 724308 1417532 4 21:23:01 30540032 31790504 2399188 7.28 41764 1535664 1378404 4.06 724604 1417540 8 21:24:01 30539916 31790552 2399304 7.28 41820 1535796 1378404 4.06 724652 1417648 8 21:25:01 30540272 31790904 2398948 7.28 41868 1535792 1378404 4.06 724700 1417676 40 21:26:01 30539620 31790360 2399600 7.28 41956 1535804 1378404 4.06 724924 1417680 184 21:27:01 30539576 31790416 2399644 7.29 42044 1535808 1378404 4.06 724868 1417684 172 
[sar -r memory samples 21:28:01-23:09:01 omitted: steady state, %memused ~7.29-7.34; test-window samples and summary row retained below]
23:10:01 30494288 31757076 2444932 7.42 47320 1542228 1393000 4.10 764436 1422400 244
23:11:01 29956464 31616804 2982756 9.06 80844 1885780 1493996 4.40 962076 1715220 202272
23:12:01 27598468 31603596 5340752 16.21 121268 4107524 1433732 4.22 1076624 3831392 2007684
23:13:01 23849640 30279108 9089580 27.60 162412 6359296 7882272 23.19 2555020 5917192 424
23:14:01 22697192 29261856 10242028 31.09 163388 6489872 9196552 27.06 3621628 5992044 204
23:15:01 22675316 29240756 10263904 31.16 163520 6490440 9193144 27.05 3642048 5991972 252
23:16:01 25171492 31560620 7767728 23.58 165904 6329796 1577168 4.64 1351976 5850540 45984
Average: 30354935 31756860 2584285 7.85 47402 1675048 1514334 4.46 770974 1546513 14102

20:35:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
[sar -n DEV samples 20:36:01-23:16:01 omitted: lo, ens3, docker0 near-idle except the test window 23:10-23:16 (ens3 peaks: 16335.68 rxkB/s at 23:12:01, 37287.10 rxkB/s at 23:16:01); summary rows retained below]
Average: lo 0.29 0.29 0.04 0.04 0.00 0.00 0.00 0.00
Average: ens3 9.94 5.81 228.60 1.32 0.00 0.00 0.00 0.00
Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-997) 01/30/24 _x86_64_ (8 CPU)

20:34:24 LINUX RESTART (8 CPU)

20:35:02 CPU %user %nice %system %iowait %steal %idle
[per-minute, per-CPU samples 20:36:01-20:45:01 omitted: all CPUs >98% idle in this window except CPU 0, which shows %iowait ~6.7-10.9 during 20:36-20:43]
99.97 20:45:01 2 0.03 0.00 0.03 0.00 0.02 99.92 20:45:01 3 0.00 0.00 0.00 0.00 0.00 100.00 20:45:01 4 0.03 0.00 0.00 0.00 0.02 99.95 20:45:01 5 0.00 0.00 0.00 0.00 0.00 100.00 20:45:01 6 0.03 0.00 0.00 0.00 0.00 99.97 20:45:01 7 0.03 0.00 0.02 0.00 0.00 99.95 20:46:01 all 0.04 0.00 0.01 0.00 0.01 99.93 20:46:01 0 0.00 0.00 0.00 0.02 0.00 99.98 20:46:01 1 0.15 0.00 0.05 0.00 0.00 99.80 20:46:01 2 0.02 0.00 0.02 0.00 0.02 99.95 20:46:01 3 0.00 0.00 0.00 0.00 0.00 100.00 20:46:01 4 0.00 0.00 0.00 0.00 0.00 100.00 20:46:01 5 0.00 0.00 0.00 0.00 0.00 100.00 20:46:01 6 0.15 0.00 0.02 0.00 0.00 99.83 20:46:01 7 0.03 0.00 0.00 0.02 0.00 99.95 20:46:01 CPU %user %nice %system %iowait %steal %idle 20:47:01 all 0.01 0.00 0.01 0.00 0.00 99.97 20:47:01 0 0.00 0.00 0.02 0.03 0.00 99.95 20:47:01 1 0.00 0.00 0.00 0.00 0.00 100.00 20:47:01 2 0.02 0.00 0.02 0.00 0.03 99.93 20:47:01 3 0.00 0.00 0.00 0.00 0.00 100.00 20:47:01 4 0.00 0.00 0.00 0.00 0.00 100.00 20:47:01 5 0.00 0.00 0.00 0.00 0.00 100.00 20:47:01 6 0.03 0.00 0.03 0.00 0.00 99.93 20:47:01 7 0.02 0.00 0.00 0.00 0.00 99.98 20:48:01 all 0.02 0.00 0.00 0.00 0.01 99.97 20:48:01 0 0.00 0.00 0.00 0.02 0.00 99.98 20:48:01 1 0.03 0.00 0.02 0.00 0.00 99.95 20:48:01 2 0.03 0.00 0.00 0.00 0.02 99.95 20:48:01 3 0.00 0.00 0.00 0.00 0.02 99.98 20:48:01 4 0.00 0.00 0.02 0.00 0.02 99.97 20:48:01 5 0.02 0.00 0.00 0.00 0.00 99.98 20:48:01 6 0.03 0.00 0.00 0.00 0.00 99.97 20:48:01 7 0.02 0.00 0.00 0.00 0.02 99.97 20:49:01 all 0.01 0.00 0.01 0.00 0.00 99.98 20:49:01 0 0.00 0.00 0.02 0.02 0.00 99.97 20:49:01 1 0.02 0.00 0.00 0.00 0.00 99.98 20:49:01 2 0.02 0.00 0.03 0.00 0.02 99.93 20:49:01 3 0.00 0.00 0.00 0.00 0.00 100.00 20:49:01 4 0.03 0.00 0.00 0.00 0.00 99.97 20:49:01 5 0.02 0.00 0.00 0.00 0.00 99.98 20:49:01 6 0.00 0.00 0.00 0.00 0.00 100.00 20:49:01 7 0.00 0.00 0.00 0.00 0.00 100.00 20:50:01 all 0.01 0.00 0.01 0.02 0.00 99.95 20:50:01 0 0.00 0.00 0.00 0.08 0.02 99.90 20:50:01 1 0.02 0.00 0.00 0.00 0.00 99.98 20:50:01 2 0.00 0.00 0.00 
0.00 0.03 99.97 20:50:01 3 0.00 0.00 0.00 0.00 0.00 100.00 20:50:01 4 0.00 0.00 0.02 0.00 0.02 99.97 20:50:01 5 0.03 0.00 0.02 0.00 0.00 99.95 20:50:01 6 0.03 0.00 0.05 0.05 0.00 99.87 20:50:01 7 0.03 0.00 0.02 0.02 0.00 99.93 20:51:01 all 0.01 0.00 0.01 0.00 0.00 99.98 20:51:01 0 0.00 0.00 0.00 0.02 0.00 99.98 20:51:01 1 0.00 0.00 0.00 0.00 0.00 100.00 20:51:01 2 0.02 0.00 0.02 0.00 0.02 99.95 20:51:01 3 0.00 0.00 0.02 0.00 0.00 99.98 20:51:01 4 0.03 0.00 0.00 0.02 0.00 99.95 20:51:01 5 0.00 0.00 0.00 0.00 0.00 100.00 20:51:01 6 0.00 0.00 0.00 0.00 0.00 100.00 20:51:01 7 0.00 0.00 0.00 0.00 0.00 100.00 20:52:01 all 0.01 0.00 0.01 0.00 0.00 99.97 20:52:01 0 0.02 0.00 0.02 0.02 0.00 99.95 20:52:01 1 0.02 0.00 0.00 0.00 0.00 99.98 20:52:01 2 0.03 0.00 0.02 0.00 0.02 99.93 20:52:01 3 0.00 0.00 0.00 0.00 0.00 100.00 20:52:01 4 0.00 0.00 0.00 0.00 0.00 100.00 20:52:01 5 0.02 0.00 0.02 0.00 0.00 99.97 20:52:01 6 0.00 0.00 0.00 0.00 0.02 99.98 20:52:01 7 0.02 0.00 0.02 0.00 0.02 99.95 20:53:01 all 0.02 0.00 0.01 0.01 0.01 99.96 20:53:01 0 0.00 0.00 0.02 0.02 0.00 99.97 20:53:01 1 0.02 0.00 0.00 0.00 0.00 99.98 20:53:01 2 0.00 0.00 0.00 0.00 0.03 99.97 20:53:01 3 0.00 0.00 0.02 0.00 0.00 99.98 20:53:01 4 0.03 0.00 0.02 0.00 0.02 99.93 20:53:01 5 0.02 0.00 0.02 0.00 0.00 99.97 20:53:01 6 0.03 0.00 0.00 0.00 0.00 99.97 20:53:01 7 0.02 0.00 0.00 0.07 0.00 99.92 20:54:01 all 0.01 0.00 0.00 0.00 0.01 99.98 20:54:01 0 0.00 0.00 0.00 0.02 0.00 99.98 20:54:01 1 0.02 0.00 0.00 0.00 0.02 99.97 20:54:01 2 0.02 0.00 0.02 0.00 0.02 99.95 20:54:01 3 0.02 0.00 0.00 0.00 0.00 99.98 20:54:01 4 0.00 0.00 0.00 0.00 0.00 100.00 20:54:01 5 0.02 0.00 0.00 0.00 0.00 99.98 20:54:01 6 0.02 0.00 0.00 0.00 0.00 99.98 20:54:01 7 0.03 0.00 0.00 0.00 0.00 99.97 20:55:01 all 0.01 0.00 0.00 0.00 0.00 99.98 20:55:01 0 0.00 0.00 0.02 0.03 0.00 99.95 20:55:01 1 0.00 0.00 0.00 0.00 0.00 100.00 20:55:01 2 0.00 0.00 0.02 0.02 0.02 99.95 20:55:01 3 0.00 0.00 0.00 0.00 0.00 100.00 20:55:01 4 0.03 0.00 0.02 0.00 
0.02 99.93 20:55:01 5 0.02 0.00 0.00 0.00 0.00 99.98 20:55:01 6 0.00 0.00 0.00 0.00 0.00 100.00 20:55:01 7 0.02 0.00 0.00 0.00 0.00 99.98 20:56:01 all 0.01 0.00 0.01 0.00 0.00 99.97 20:56:01 0 0.00 0.00 0.00 0.02 0.00 99.98 20:56:01 1 0.03 0.00 0.02 0.00 0.00 99.95 20:56:01 2 0.02 0.00 0.00 0.00 0.03 99.95 20:56:01 3 0.00 0.00 0.00 0.00 0.00 100.00 20:56:01 4 0.03 0.00 0.00 0.00 0.00 99.97 20:56:01 5 0.00 0.00 0.02 0.00 0.00 99.98 20:56:01 6 0.00 0.00 0.00 0.00 0.00 100.00 20:56:01 7 0.02 0.00 0.02 0.00 0.02 99.95 20:57:01 all 0.04 0.00 0.00 0.00 0.00 99.95 20:57:01 0 0.00 0.00 0.00 0.02 0.00 99.98 20:57:01 1 0.00 0.00 0.02 0.00 0.00 99.98 20:57:01 2 0.00 0.00 0.02 0.02 0.02 99.95 20:57:01 3 0.00 0.00 0.00 0.00 0.00 100.00 20:57:01 4 0.27 0.00 0.00 0.00 0.02 99.72 20:57:01 5 0.02 0.00 0.00 0.00 0.00 99.98 20:57:01 6 0.02 0.00 0.00 0.00 0.00 99.98 20:57:01 7 0.03 0.00 0.00 0.00 0.00 99.97 20:57:01 CPU %user %nice %system %iowait %steal %idle 20:58:01 all 0.01 0.00 0.00 0.00 0.01 99.98 20:58:01 0 0.02 0.00 0.00 0.02 0.00 99.97 20:58:01 1 0.00 0.00 0.00 0.00 0.00 100.00 20:58:01 2 0.03 0.00 0.02 0.00 0.02 99.93 20:58:01 3 0.00 0.00 0.00 0.00 0.00 100.00 20:58:01 4 0.00 0.00 0.00 0.00 0.02 99.98 20:58:01 5 0.00 0.00 0.00 0.00 0.00 100.00 20:58:01 6 0.00 0.00 0.00 0.00 0.00 100.00 20:58:01 7 0.02 0.00 0.00 0.00 0.00 99.98 20:59:01 all 0.01 0.00 0.00 0.01 0.00 99.97 20:59:01 0 0.00 0.00 0.02 0.03 0.00 99.95 20:59:01 1 0.02 0.00 0.00 0.00 0.00 99.98 20:59:01 2 0.02 0.00 0.02 0.02 0.03 99.92 20:59:01 3 0.00 0.00 0.00 0.00 0.00 100.00 20:59:01 4 0.00 0.00 0.00 0.00 0.00 100.00 20:59:01 5 0.02 0.00 0.00 0.00 0.00 99.98 20:59:01 6 0.00 0.00 0.00 0.00 0.00 100.00 20:59:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:00:01 all 0.01 0.00 0.00 0.00 0.00 99.98 21:00:01 0 0.00 0.00 0.00 0.02 0.00 99.98 21:00:01 1 0.02 0.00 0.00 0.00 0.00 99.98 21:00:01 2 0.02 0.00 0.02 0.00 0.02 99.95 21:00:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:00:01 4 0.00 0.00 0.00 0.00 0.00 100.00 21:00:01 5 0.00 0.00 
0.00 0.00 0.00 100.00 21:00:01 6 0.00 0.00 0.02 0.00 0.00 99.98 21:00:01 7 0.02 0.00 0.02 0.00 0.00 99.97 21:01:01 all 0.01 0.00 0.01 0.00 0.00 99.98 21:01:01 0 0.00 0.00 0.00 0.02 0.00 99.98 21:01:01 1 0.02 0.00 0.00 0.00 0.00 99.98 21:01:01 2 0.02 0.00 0.02 0.00 0.03 99.93 21:01:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:01:01 4 0.02 0.00 0.00 0.00 0.02 99.97 21:01:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:01:01 6 0.02 0.00 0.02 0.00 0.00 99.97 21:01:01 7 0.02 0.00 0.02 0.00 0.02 99.95 21:02:01 all 0.00 0.00 0.00 0.00 0.01 99.98 21:02:01 0 0.00 0.00 0.00 0.02 0.00 99.98 21:02:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:02:01 2 0.02 0.00 0.02 0.02 0.02 99.93 21:02:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:02:01 4 0.00 0.00 0.00 0.00 0.00 100.00 21:02:01 5 0.02 0.00 0.00 0.00 0.02 99.97 21:02:01 6 0.00 0.00 0.00 0.00 0.00 100.00 21:02:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:03:01 all 0.01 0.00 0.01 0.01 0.00 99.97 21:03:01 0 0.00 0.00 0.00 0.03 0.00 99.97 21:03:01 1 0.00 0.00 0.02 0.00 0.00 99.98 21:03:01 2 0.02 0.00 0.02 0.00 0.03 99.93 21:03:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:03:01 4 0.03 0.00 0.00 0.00 0.00 99.97 21:03:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:03:01 6 0.02 0.00 0.00 0.00 0.00 99.98 21:03:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:04:01 all 0.01 0.00 0.00 0.00 0.01 99.98 21:04:01 0 0.00 0.00 0.02 0.03 0.02 99.93 21:04:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:04:01 2 0.02 0.00 0.02 0.02 0.03 99.92 21:04:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:04:01 4 0.00 0.00 0.00 0.00 0.00 100.00 21:04:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:04:01 6 0.00 0.00 0.00 0.00 0.02 99.98 21:04:01 7 0.00 0.00 0.02 0.00 0.00 99.98 21:05:01 all 0.02 0.00 0.01 0.00 0.01 99.97 21:05:01 0 0.02 0.00 0.02 0.03 0.00 99.93 21:05:01 1 0.00 0.00 0.02 0.00 0.00 99.98 21:05:01 2 0.05 0.00 0.00 0.00 0.03 99.92 21:05:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:05:01 4 0.03 0.00 0.00 0.00 0.00 99.97 21:05:01 5 0.02 0.00 0.02 0.00 0.02 99.95 21:05:01 6 0.02 0.00 0.02 0.00 0.00 99.97 21:05:01 7 0.02 0.00 
0.00 0.00 0.00 99.98 21:06:01 all 0.15 0.00 0.01 0.00 0.00 99.83 21:06:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:06:01 1 0.00 0.00 0.02 0.00 0.00 99.98 21:06:01 2 0.00 0.00 0.03 0.02 0.02 99.93 21:06:01 3 0.00 0.00 0.02 0.00 0.00 99.98 21:06:01 4 0.00 0.00 0.00 0.00 0.00 100.00 21:06:01 5 1.19 0.00 0.02 0.00 0.00 98.79 21:06:01 6 0.02 0.00 0.02 0.00 0.00 99.97 21:06:01 7 0.03 0.00 0.00 0.00 0.02 99.95 21:07:01 all 0.12 0.00 0.01 0.00 0.00 99.86 21:07:01 0 0.02 0.00 0.00 0.03 0.00 99.95 21:07:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:07:01 2 0.03 0.00 0.00 0.00 0.03 99.93 21:07:01 3 0.02 0.00 0.00 0.00 0.00 99.98 21:07:01 4 0.00 0.00 0.00 0.00 0.00 100.00 21:07:01 5 0.83 0.00 0.03 0.00 0.00 99.14 21:07:01 6 0.02 0.00 0.00 0.00 0.00 99.98 21:07:01 7 0.03 0.00 0.00 0.00 0.00 99.97 21:08:01 all 0.01 0.00 0.00 0.00 0.00 99.98 21:08:01 0 0.02 0.00 0.00 0.03 0.00 99.95 21:08:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:08:01 2 0.02 0.00 0.02 0.00 0.02 99.95 21:08:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:08:01 4 0.02 0.00 0.00 0.00 0.00 99.98 21:08:01 5 0.00 0.00 0.02 0.00 0.00 99.98 21:08:01 6 0.00 0.00 0.00 0.00 0.00 100.00 21:08:01 7 0.02 0.00 0.00 0.00 0.02 99.97 21:08:01 CPU %user %nice %system %iowait %steal %idle 21:09:01 all 0.20 0.00 0.01 0.00 0.00 99.79 21:09:01 0 0.00 0.00 0.00 0.02 0.00 99.98 21:09:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:09:01 2 0.05 0.00 0.02 0.02 0.02 99.90 21:09:01 3 0.00 0.00 0.02 0.00 0.00 99.98 21:09:01 4 0.02 0.00 0.00 0.00 0.00 99.98 21:09:01 5 0.03 0.00 0.02 0.00 0.00 99.95 21:09:01 6 0.02 0.00 0.00 0.00 0.00 99.98 21:09:01 7 1.43 0.00 0.00 0.00 0.00 98.57 21:10:01 all 0.01 0.00 0.00 0.01 0.01 99.97 21:10:01 0 0.00 0.00 0.02 0.02 0.00 99.97 21:10:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:10:01 2 0.02 0.00 0.00 0.05 0.02 99.92 21:10:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:10:01 4 0.00 0.00 0.02 0.00 0.00 99.98 21:10:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:10:01 6 0.00 0.00 0.00 0.00 0.00 100.00 21:10:01 7 0.05 0.00 0.00 0.00 0.02 99.93 21:11:01 all 
0.04 0.00 0.01 0.00 0.00 99.94 21:11:01 0 0.02 0.00 0.02 0.03 0.00 99.93 21:11:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:11:01 2 0.15 0.00 0.03 0.02 0.02 99.78 21:11:01 3 0.02 0.00 0.00 0.00 0.02 99.97 21:11:01 4 0.03 0.00 0.03 0.00 0.02 99.92 21:11:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:11:01 6 0.10 0.00 0.02 0.00 0.00 99.88 21:11:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:12:01 all 0.14 0.00 0.01 0.00 0.01 99.85 21:12:01 0 0.30 0.00 0.00 0.00 0.00 99.70 21:12:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:12:01 2 0.73 0.00 0.02 0.00 0.03 99.22 21:12:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:12:01 4 0.02 0.00 0.00 0.00 0.00 99.98 21:12:01 5 0.03 0.00 0.02 0.00 0.00 99.95 21:12:01 6 0.02 0.00 0.00 0.00 0.02 99.97 21:12:01 7 0.00 0.00 0.02 0.00 0.00 99.98 21:13:01 all 0.29 0.00 0.03 0.03 0.00 99.64 21:13:01 0 0.25 0.00 0.03 0.20 0.00 99.52 21:13:01 1 0.40 0.00 0.05 0.07 0.00 99.48 21:13:01 2 0.42 0.00 0.03 0.02 0.02 99.52 21:13:01 3 0.22 0.00 0.03 0.00 0.00 99.75 21:13:01 4 0.17 0.00 0.05 0.00 0.00 99.78 21:13:01 5 0.03 0.00 0.03 0.00 0.00 99.93 21:13:01 6 0.77 0.00 0.02 0.00 0.02 99.20 21:13:01 7 0.03 0.00 0.02 0.00 0.00 99.95 21:14:01 all 0.01 0.00 0.00 0.00 0.00 99.97 21:14:01 0 0.00 0.00 0.00 0.02 0.02 99.97 21:14:01 1 0.02 0.00 0.00 0.00 0.00 99.98 21:14:01 2 0.03 0.00 0.02 0.00 0.02 99.93 21:14:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:14:01 4 0.02 0.00 0.02 0.00 0.00 99.97 21:14:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:14:01 6 0.02 0.00 0.00 0.00 0.00 99.98 21:14:01 7 0.02 0.00 0.02 0.00 0.02 99.95 21:15:01 all 0.10 0.00 0.01 0.01 0.00 99.89 21:15:01 0 0.00 0.00 0.00 0.03 0.00 99.97 21:15:01 1 0.00 0.00 0.02 0.00 0.00 99.98 21:15:01 2 0.02 0.00 0.03 0.00 0.02 99.93 21:15:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:15:01 4 0.05 0.00 0.02 0.00 0.00 99.93 21:15:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:15:01 6 0.66 0.00 0.00 0.02 0.00 99.32 21:15:01 7 0.03 0.00 0.02 0.00 0.00 99.95 21:16:01 all 0.02 0.00 0.01 0.03 0.00 99.94 21:16:01 0 0.00 0.00 0.00 0.18 0.00 99.82 21:16:01 1 0.02 
0.00 0.00 0.02 0.00 99.97 21:16:01 2 0.03 0.00 0.02 0.00 0.02 99.93 21:16:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:16:01 4 0.03 0.00 0.00 0.00 0.00 99.97 21:16:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:16:01 6 0.08 0.00 0.00 0.00 0.02 99.90 21:16:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:17:01 all 0.05 0.00 0.01 0.00 0.01 99.93 21:17:01 0 0.02 0.00 0.00 0.02 0.00 99.97 21:17:01 1 0.03 0.00 0.00 0.00 0.00 99.97 21:17:01 2 0.03 0.00 0.05 0.02 0.02 99.88 21:17:01 3 0.02 0.00 0.00 0.00 0.00 99.98 21:17:01 4 0.03 0.00 0.00 0.00 0.00 99.97 21:17:01 5 0.00 0.00 0.02 0.02 0.00 99.97 21:17:01 6 0.28 0.00 0.02 0.00 0.00 99.70 21:17:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:18:01 all 0.03 0.00 0.01 0.01 0.00 99.95 21:18:01 0 0.02 0.00 0.00 0.03 0.00 99.95 21:18:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:18:01 2 0.03 0.00 0.03 0.00 0.02 99.92 21:18:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:18:01 4 0.02 0.00 0.00 0.00 0.00 99.98 21:18:01 5 0.02 0.00 0.00 0.00 0.02 99.97 21:18:01 6 0.18 0.00 0.00 0.00 0.00 99.82 21:18:01 7 0.00 0.00 0.02 0.00 0.00 99.98 21:19:01 all 0.10 0.00 0.01 0.08 0.00 99.80 21:19:01 0 0.00 0.00 0.00 0.63 0.00 99.37 21:19:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:19:01 2 0.03 0.00 0.02 0.05 0.02 99.88 21:19:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:19:01 4 0.02 0.00 0.02 0.00 0.00 99.97 21:19:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:19:01 6 0.70 0.00 0.02 0.00 0.00 99.29 21:19:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:19:01 CPU %user %nice %system %iowait %steal %idle 21:20:01 all 0.02 0.00 0.01 0.02 0.00 99.95 21:20:01 0 0.00 0.00 0.03 0.13 0.00 99.83 21:20:01 1 0.03 0.00 0.00 0.00 0.02 99.95 21:20:01 2 0.03 0.00 0.02 0.00 0.00 99.95 21:20:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:20:01 4 0.02 0.00 0.02 0.00 0.00 99.97 21:20:01 5 0.02 0.00 0.02 0.00 0.00 99.97 21:20:01 6 0.08 0.00 0.00 0.00 0.02 99.90 21:20:01 7 0.02 0.00 0.02 0.00 0.02 99.95 21:21:01 all 0.07 0.00 0.00 0.07 0.00 99.86 21:21:01 0 0.02 0.00 0.00 0.50 0.00 99.48 21:21:01 1 0.05 0.00 0.03 0.00 0.02 99.90 21:21:01 
2 0.00 0.00 0.00 0.00 0.00 100.00 21:21:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:21:01 4 0.00 0.00 0.00 0.00 0.00 100.00 21:21:01 5 0.00 0.00 0.02 0.00 0.00 99.98 21:21:01 6 0.47 0.00 0.00 0.00 0.00 99.53 21:21:01 7 0.02 0.00 0.00 0.03 0.00 99.95 21:22:01 all 0.01 0.00 0.00 0.04 0.00 99.95 21:22:01 0 0.00 0.00 0.00 0.28 0.00 99.72 21:22:01 1 0.03 0.00 0.00 0.00 0.02 99.95 21:22:01 2 0.02 0.00 0.00 0.00 0.00 99.98 21:22:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:22:01 4 0.02 0.00 0.00 0.00 0.02 99.97 21:22:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:22:01 6 0.00 0.00 0.00 0.00 0.00 100.00 21:22:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:23:01 all 0.22 0.00 0.00 0.06 0.00 99.71 21:23:01 0 0.00 0.00 0.00 0.33 0.02 99.65 21:23:01 1 0.03 0.00 0.02 0.00 0.00 99.95 21:23:01 2 0.00 0.00 0.00 0.00 0.00 100.00 21:23:01 3 0.00 0.00 0.02 0.00 0.00 99.98 21:23:01 4 0.03 0.00 0.00 0.00 0.00 99.97 21:23:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:23:01 6 1.65 0.00 0.02 0.13 0.02 98.19 21:23:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:24:01 all 0.10 0.00 0.00 0.06 0.00 99.84 21:24:01 0 0.02 0.00 0.02 0.00 0.00 99.97 21:24:01 1 0.00 0.00 0.00 0.07 0.00 99.93 21:24:01 2 0.03 0.00 0.00 0.00 0.02 99.95 21:24:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:24:01 4 0.00 0.00 0.00 0.00 0.00 100.00 21:24:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:24:01 6 0.75 0.00 0.02 0.38 0.02 98.84 21:24:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:25:01 all 0.00 0.00 0.00 0.05 0.00 99.94 21:25:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:25:01 1 0.00 0.00 0.02 0.40 0.00 99.58 21:25:01 2 0.00 0.00 0.02 0.00 0.00 99.98 21:25:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:25:01 4 0.02 0.00 0.00 0.00 0.00 99.98 21:25:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:25:01 6 0.02 0.00 0.02 0.00 0.03 99.93 21:25:01 7 0.00 0.00 0.00 0.00 0.02 99.98 21:26:01 all 0.28 0.00 0.01 0.02 0.00 99.69 21:26:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:26:01 1 0.20 0.00 0.00 0.08 0.00 99.72 21:26:01 2 0.00 0.00 0.02 0.00 0.00 99.98 21:26:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:26:01 
4 0.00 0.00 0.02 0.00 0.00 99.98 21:26:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:26:01 6 1.95 0.00 0.00 0.03 0.02 98.00 21:26:01 7 0.02 0.00 0.02 0.00 0.00 99.97 21:27:01 all 0.25 0.00 0.01 0.01 0.00 99.72 21:27:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:27:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:27:01 2 0.02 0.00 0.00 0.00 0.00 99.98 21:27:01 3 0.00 0.00 0.00 0.02 0.00 99.98 21:27:01 4 0.02 0.00 0.00 0.00 0.00 99.98 21:27:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:27:01 6 1.95 0.00 0.05 0.10 0.02 97.88 21:27:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:28:01 all 0.03 0.00 0.01 0.01 0.01 99.94 21:28:01 0 0.00 0.00 0.02 0.00 0.00 99.98 21:28:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:28:01 2 0.00 0.00 0.00 0.00 0.02 99.98 21:28:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:28:01 4 0.00 0.00 0.00 0.00 0.00 100.00 21:28:01 5 0.00 0.00 0.02 0.00 0.00 99.98 21:28:01 6 0.25 0.00 0.07 0.05 0.03 99.60 21:28:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:29:01 all 0.01 0.00 0.00 0.01 0.00 99.97 21:29:01 0 0.02 0.00 0.00 0.00 0.00 99.98 21:29:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:29:01 2 0.02 0.00 0.02 0.00 0.00 99.97 21:29:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:29:01 4 0.03 0.00 0.00 0.00 0.00 99.97 21:29:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:29:01 6 0.02 0.00 0.02 0.05 0.02 99.90 21:29:01 7 0.00 0.00 0.00 0.00 0.02 99.98 21:30:01 all 0.01 0.00 0.00 0.01 0.01 99.97 21:30:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:30:01 1 0.02 0.00 0.00 0.00 0.00 99.98 21:30:01 2 0.00 0.00 0.00 0.00 0.00 100.00 21:30:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:30:01 4 0.02 0.00 0.00 0.00 0.00 99.98 21:30:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:30:01 6 0.02 0.00 0.02 0.05 0.03 99.88 21:30:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:30:01 CPU %user %nice %system %iowait %steal %idle 21:31:01 all 0.01 0.00 0.01 0.00 0.00 99.97 21:31:01 0 0.02 0.00 0.00 0.00 0.00 99.98 21:31:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:31:01 2 0.02 0.00 0.02 0.00 0.00 99.97 21:31:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:31:01 4 0.02 0.00 0.00 0.00 0.00 
99.98 21:31:01 5 0.02 0.00 0.02 0.00 0.00 99.97 21:31:01 6 0.00 0.00 0.02 0.03 0.02 99.93 21:31:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:32:01 all 0.01 0.00 0.01 0.00 0.00 99.98 21:32:01 0 0.00 0.00 0.02 0.00 0.00 99.98 21:32:01 1 0.02 0.00 0.00 0.00 0.02 99.97 21:32:01 2 0.02 0.00 0.00 0.00 0.02 99.97 21:32:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:32:01 4 0.02 0.00 0.00 0.00 0.00 99.98 21:32:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:32:01 6 0.02 0.00 0.03 0.03 0.03 99.88 21:32:01 7 0.00 0.00 0.02 0.00 0.00 99.98 21:33:01 all 0.01 0.00 0.00 0.01 0.00 99.97 21:33:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:33:01 1 0.00 0.00 0.02 0.00 0.00 99.98 21:33:01 2 0.03 0.00 0.00 0.00 0.00 99.97 21:33:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:33:01 4 0.00 0.00 0.02 0.00 0.02 99.97 21:33:01 5 0.05 0.00 0.00 0.00 0.00 99.95 21:33:01 6 0.00 0.00 0.02 0.03 0.02 99.93 21:33:01 7 0.00 0.00 0.02 0.00 0.00 99.98 21:34:01 all 0.00 0.00 0.00 0.01 0.00 99.98 21:34:01 0 0.00 0.00 0.00 0.00 0.02 99.98 21:34:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:34:01 2 0.00 0.00 0.00 0.00 0.00 100.00 21:34:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:34:01 4 0.02 0.00 0.00 0.02 0.00 99.97 21:34:01 5 0.00 0.00 0.02 0.00 0.00 99.98 21:34:01 6 0.00 0.00 0.00 0.05 0.03 99.92 21:34:01 7 0.02 0.00 0.00 0.00 0.02 99.97 21:35:01 all 0.04 0.00 0.01 0.01 0.01 99.93 21:35:01 0 0.02 0.00 0.02 0.00 0.00 99.97 21:35:01 1 0.27 0.00 0.00 0.00 0.00 99.73 21:35:01 2 0.03 0.00 0.00 0.00 0.00 99.97 21:35:01 3 0.00 0.00 0.02 0.00 0.00 99.98 21:35:01 4 0.00 0.00 0.00 0.00 0.00 100.00 21:35:01 5 0.02 0.00 0.02 0.00 0.00 99.97 21:35:01 6 0.03 0.00 0.05 0.03 0.02 99.87 21:35:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:36:01 all 0.19 0.00 0.00 0.01 0.00 99.79 21:36:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:36:01 1 0.00 0.00 0.02 0.00 0.00 99.98 21:36:01 2 0.02 0.00 0.00 0.00 0.02 99.97 21:36:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:36:01 4 0.00 0.00 0.00 0.00 0.00 100.00 21:36:01 5 0.00 0.00 0.02 0.00 0.00 99.98 21:36:01 6 1.50 0.00 0.02 0.05 0.03 
98.40 21:36:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:37:01 all 0.26 0.00 0.00 0.01 0.01 99.71 21:37:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:37:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:37:01 2 0.02 0.00 0.02 0.00 0.00 99.97 21:37:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:37:01 4 0.03 0.00 0.00 0.07 0.02 99.88 21:37:01 5 0.03 0.00 0.00 0.00 0.02 99.95 21:37:01 6 1.99 0.00 0.00 0.05 0.02 97.95 21:37:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:38:01 all 0.25 0.00 0.01 0.01 0.00 99.72 21:38:01 0 0.00 0.00 0.02 0.00 0.00 99.98 21:38:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:38:01 2 0.02 0.00 0.02 0.00 0.00 99.97 21:38:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:38:01 4 0.00 0.00 0.02 0.03 0.02 99.93 21:38:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:38:01 6 1.97 0.00 0.00 0.08 0.00 97.95 21:38:01 7 0.00 0.00 0.00 0.00 0.02 99.98 21:39:01 all 0.06 0.00 0.01 0.01 0.00 99.92 21:39:01 0 0.02 0.00 0.02 0.00 0.00 99.97 21:39:01 1 0.02 0.00 0.00 0.00 0.00 99.98 21:39:01 2 0.02 0.00 0.00 0.00 0.00 99.98 21:39:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:39:01 4 0.03 0.00 0.02 0.05 0.02 99.88 21:39:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:39:01 6 0.40 0.00 0.03 0.00 0.02 99.55 21:39:01 7 0.02 0.00 0.02 0.00 0.00 99.97 21:40:01 all 0.01 0.00 0.01 0.01 0.01 99.97 21:40:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:40:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:40:01 2 0.02 0.00 0.00 0.00 0.02 99.97 21:40:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:40:01 4 0.03 0.00 0.03 0.05 0.03 99.85 21:40:01 5 0.00 0.00 0.02 0.00 0.00 99.98 21:40:01 6 0.00 0.00 0.00 0.00 0.00 100.00 21:40:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:41:01 all 0.01 0.00 0.01 0.00 0.00 99.98 21:41:01 0 0.00 0.00 0.02 0.00 0.00 99.98 21:41:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:41:01 2 0.02 0.00 0.02 0.00 0.00 99.97 21:41:01 3 0.00 0.00 0.00 0.00 0.02 99.98 21:41:01 4 0.02 0.00 0.02 0.03 0.02 99.92 21:41:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:41:01 6 0.00 0.00 0.00 0.00 0.00 100.00 21:41:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:41:01 CPU %user %nice %system 
%iowait %steal %idle 21:42:01 all 0.04 0.00 0.01 0.01 0.00 99.94 21:42:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:42:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:42:01 2 0.00 0.00 0.00 0.00 0.00 100.00 21:42:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:42:01 4 0.02 0.00 0.03 0.05 0.02 99.88 21:42:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:42:01 6 0.30 0.00 0.03 0.00 0.00 99.67 21:42:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:43:01 all 0.01 0.00 0.01 0.00 0.01 99.97 21:43:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:43:01 1 0.00 0.00 0.02 0.00 0.00 99.98 21:43:01 2 0.03 0.00 0.00 0.00 0.02 99.95 21:43:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:43:01 4 0.03 0.00 0.02 0.03 0.02 99.90 21:43:01 5 0.00 0.00 0.03 0.00 0.00 99.97 21:43:01 6 0.02 0.00 0.00 0.00 0.02 99.97 21:43:01 7 0.02 0.00 0.00 0.00 0.02 99.97 21:44:01 all 0.00 0.00 0.00 0.01 0.00 99.98 21:44:01 0 0.00 0.00 0.02 0.00 0.00 99.98 21:44:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:44:01 2 0.00 0.00 0.00 0.00 0.00 100.00 21:44:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:44:01 4 0.02 0.00 0.02 0.05 0.03 99.88 21:44:01 5 0.03 0.00 0.00 0.00 0.00 99.97 21:44:01 6 0.00 0.00 0.00 0.00 0.00 100.00 21:44:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:45:01 all 0.01 0.00 0.01 0.01 0.00 99.97 21:45:01 0 0.00 0.00 0.00 0.00 0.02 99.98 21:45:01 1 0.00 0.00 0.02 0.00 0.00 99.98 21:45:01 2 0.03 0.00 0.00 0.00 0.00 99.97 21:45:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:45:01 4 0.02 0.00 0.02 0.08 0.02 99.87 21:45:01 5 0.00 0.00 0.00 0.00 0.00 100.00 21:45:01 6 0.02 0.00 0.00 0.00 0.00 99.98 21:45:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:46:01 all 0.01 0.00 0.00 0.01 0.00 99.98 21:46:01 0 0.02 0.00 0.00 0.00 0.00 99.98 21:46:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:46:01 2 0.00 0.00 0.02 0.00 0.02 99.97 21:46:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:46:01 4 0.02 0.00 0.00 0.07 0.02 99.90 21:46:01 5 0.02 0.00 0.02 0.00 0.00 99.97 21:46:01 6 0.00 0.00 0.02 0.00 0.00 99.98 21:46:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:47:01 all 0.15 0.00 0.01 0.01 0.00 99.83 21:47:01 0 0.02 
0.00 0.00 0.00 0.00 99.98 21:47:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:47:01 2 0.02 0.00 0.02 0.00 0.00 99.97 21:47:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:47:01 4 0.00 0.00 0.03 0.05 0.02 99.90 21:47:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:47:01 6 1.16 0.00 0.02 0.00 0.02 98.81 21:47:01 7 0.00 0.00 0.00 0.00 0.02 99.98 21:48:01 all 0.26 0.00 0.00 0.01 0.01 99.73 21:48:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:48:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:48:01 2 0.02 0.00 0.00 0.02 0.00 99.97 21:48:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:48:01 4 0.03 0.00 0.02 0.02 0.03 99.90 21:48:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:48:01 6 1.97 0.00 0.00 0.00 0.00 98.03 21:48:01 7 0.00 0.00 0.02 0.00 0.00 99.98 21:49:01 all 0.03 0.00 0.00 0.01 0.00 99.96 21:49:01 0 0.00 0.00 0.02 0.00 0.00 99.98 21:49:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:49:01 2 0.00 0.00 0.00 0.02 0.02 99.97 21:49:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:49:01 4 0.02 0.00 0.00 0.05 0.02 99.92 21:49:01 5 0.00 0.00 0.02 0.00 0.00 99.98 21:49:01 6 0.17 0.00 0.00 0.00 0.00 99.83 21:49:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:50:01 all 0.01 0.00 0.01 0.01 0.01 99.97 21:50:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:50:01 1 0.00 0.00 0.02 0.00 0.00 99.98 21:50:01 2 0.00 0.00 0.00 0.05 0.00 99.95 21:50:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:50:01 4 0.03 0.00 0.05 0.00 0.02 99.90 21:50:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:50:01 6 0.02 0.00 0.00 0.00 0.02 99.97 21:50:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:51:01 all 0.01 0.00 0.00 0.00 0.00 99.98 21:51:01 0 0.02 0.00 0.00 0.00 0.00 99.98 21:51:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:51:01 2 0.00 0.00 0.02 0.03 0.00 99.95 21:51:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:51:01 4 0.02 0.00 0.03 0.00 0.02 99.93 21:51:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:51:01 6 0.02 0.00 0.00 0.00 0.00 99.98 21:51:01 7 0.02 0.00 0.00 0.00 0.02 99.97 21:52:01 all 0.01 0.00 0.00 0.00 0.00 99.98 21:52:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:52:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:52:01 2 0.02 
0.00 0.00 0.02 0.02 99.95 21:52:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:52:01 4 0.03 0.00 0.00 0.00 0.03 99.93 21:52:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:52:01 6 0.02 0.00 0.00 0.00 0.00 99.98 21:52:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:52:01 CPU %user %nice %system %iowait %steal %idle 21:53:01 all 0.01 0.00 0.01 0.01 0.00 99.97 21:53:01 0 0.02 0.00 0.00 0.00 0.00 99.98 21:53:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:53:01 2 0.00 0.00 0.00 0.02 0.00 99.98 21:53:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:53:01 4 0.02 0.00 0.02 0.03 0.02 99.92 21:53:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:53:01 6 0.03 0.00 0.00 0.00 0.02 99.95 21:53:01 7 0.02 0.00 0.00 0.00 0.00 99.98 21:54:01 all 0.01 0.00 0.01 0.01 0.01 99.97 21:54:01 0 0.00 0.00 0.02 0.00 0.00 99.98 21:54:01 1 0.00 0.00 0.02 0.00 0.00 99.98 21:54:01 2 0.00 0.00 0.00 0.03 0.00 99.97 21:54:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:54:01 4 0.02 0.00 0.02 0.02 0.02 99.93 21:54:01 5 0.02 0.00 0.02 0.00 0.00 99.97 21:54:01 6 0.03 0.00 0.00 0.00 0.00 99.97 21:54:01 7 0.00 0.00 0.00 0.00 0.00 100.00 21:55:01 all 0.01 0.00 0.01 0.00 0.00 99.97 21:55:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:55:01 1 0.00 0.00 0.02 0.00 0.00 99.98 21:55:01 2 0.00 0.00 0.02 0.03 0.00 99.95 21:55:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:55:01 4 0.03 0.00 0.02 0.00 0.02 99.93 21:55:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:55:01 6 0.02 0.00 0.00 0.00 0.00 99.98 21:55:01 7 0.03 0.00 0.00 0.00 0.00 99.97 21:56:01 all 0.04 0.00 0.01 0.01 0.00 99.94 21:56:01 0 0.00 0.00 0.02 0.00 0.00 99.98 21:56:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:56:01 2 0.02 0.00 0.00 0.10 0.02 99.87 21:56:01 3 0.00 0.00 0.00 0.00 0.00 100.00 21:56:01 4 0.00 0.00 0.02 0.00 0.02 99.97 21:56:01 5 0.02 0.00 0.00 0.00 0.00 99.98 21:56:01 6 0.00 0.00 0.00 0.00 0.00 100.00 21:56:01 7 0.32 0.00 0.00 0.02 0.02 99.65 21:57:01 all 0.01 0.00 0.00 0.01 0.01 99.98 21:57:01 0 0.00 0.00 0.00 0.00 0.00 100.00 21:57:01 1 0.00 0.00 0.00 0.00 0.00 100.00 21:57:01 2 0.02 0.00 0.02 0.05 0.03 99.88 
[sar -P ALL per-minute CPU samples, 21:57:01–23:16:01, condensed]

21:57:01–23:10:01: all 8 CPUs essentially idle (%idle >= 97.98 on every sample; occasional brief single-core %user spikes of 0.4–2.0).
23:11:01–23:16:01: utilization rises while the CSIT suite runs — peaks: 31.12 %user on CPU 0 at 23:11:01; 21.13 %user / 5.86 %system (all CPUs) at 23:13:01; 10.16 %iowait on CPU 6 at 23:13:01; 16.26 %user on CPU 5 at 23:16:01.

              CPU     %user     %nice   %system   %iowait    %steal     %idle
Average:      all      0.49      0.00      0.08      0.09      0.01     99.32
Average:        0      0.55      0.00      0.09      0.51      0.00     98.84
Average:        1      0.39      0.00      0.08      0.07      0.01     99.44
Average:        2      0.42      0.00      0.08      0.04      0.01     99.45
Average:        3      0.31      0.00      0.07      0.01      0.00     99.61
Average:        4      0.43      0.00      0.08      0.03      0.01     99.46
Average:        5      0.48      0.00      0.07      0.01      0.00     99.44
Average:        6      0.62      0.00      0.09      0.08      0.01     99.20
Average:        7      0.76      0.00      0.10      0.01      0.01     99.13