Started by timer
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prd-ubuntu1804-docker-8c-8g-13055 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent]   Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-xJkWDrGxpnWj/agent.2128
SSH_AGENT_PID=2130
[ssh-agent] Started.
Running ssh-add (command line suppressed)
Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14487948086648757222.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14487948086648757222.key)
[ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
The recommended git tool is: NONE
using credential onap-jenkins-ssh
Wiping out workspace first.
Cloning the remote Git repository
Cloning repository git://cloud.onap.org/mirror/policy/docker.git
 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
 > git --version # timeout=10
 > git --version # 'git version 2.17.1'
using GIT_SSH to set credentials Gerrit user
Verifying host key using manually-configured host key entries
 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
Avoid second fetch
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=30
Commit message: "Remove Dmaap configurations from CSITs"
 > git rev-list --no-walk caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785 # timeout=10
provisioning config files...
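The checkout above is the stock Jenkins pattern for a pinned-revision build: fetch every head from the mirror, resolve master, then force-checkout the exact commit. A minimal manual re-run of the same sequence, using only values that appear in this log, would be:

    # Pinned-revision checkout as Jenkins performs it above.
    git init /w/workspace/policy-pap-master-project-csit-pap
    cd /w/workspace/policy-pap-master-project-csit-pap
    git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git
    git config --add remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*'
    # Fetch all heads so any revision on any branch can be resolved locally.
    git fetch --tags --progress origin '+refs/heads/*:refs/remotes/origin/*'
    # Detached checkout of the commit under test; -f discards local changes.
    git checkout -f caa7adc30ed054d2a5cfea4a1b9a265d5cfb6785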
copy managed file [npmrc] to file:/home/jenkins/.npmrc
copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1182142332643378936.sh
---> python-tools-install.sh
Setup pyenv:
* system (set by /opt/pyenv/version)
* 3.8.13 (set by /opt/pyenv/version)
* 3.9.13 (set by /opt/pyenv/version)
* 3.10.6 (set by /opt/pyenv/version)
lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-ONT8
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ONT8/bin to PATH
Generating Requirements File
Python 3.10.6
pip 23.3.2 from /tmp/venv-ONT8/lib/python3.10/site-packages/pip (python 3.10)
appdirs==1.4.4
argcomplete==3.2.1
aspy.yaml==1.3.0
attrs==23.2.0
autopage==0.5.2
beautifulsoup4==4.12.3
boto3==1.34.21
botocore==1.34.21
bs4==0.0.2
cachetools==5.3.2
certifi==2023.11.17
cffi==1.16.0
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.3.2
click==8.1.7
cliff==4.5.0
cmd2==2.4.3
cryptography==3.3.2
debtcollector==2.5.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
distlib==0.3.8
dnspython==2.4.2
docker==4.2.2
dogpile.cache==1.3.0
email-validator==2.1.0.post1
filelock==3.13.1
future==0.18.3
gitdb==4.0.11
GitPython==3.1.41
google-auth==2.26.2
httplib2==0.22.0
identify==2.5.33
idna==3.6
importlib-resources==1.5.0
iso8601==2.1.0
Jinja2==3.1.3
jmespath==1.0.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.21.0
jsonschema-specifications==2023.12.1
keystoneauth1==5.5.0
kubernetes==29.0.0
lftools==0.37.8
lxml==5.1.0
MarkupSafe==2.1.3
msgpack==1.0.7
multi_key_dict==2.0.3
munch==4.0.0
netaddr==0.10.1
netifaces==0.11.0
niet==1.4.2
nodeenv==1.8.0
oauth2client==4.1.3
oauthlib==3.2.2
openstacksdk==0.62.0
os-client-config==2.1.0
os-service-types==1.7.0
osc-lib==3.0.0
oslo.config==9.3.0
oslo.context==5.3.0
oslo.i18n==6.2.0
oslo.log==5.4.0
oslo.serialization==5.3.0
oslo.utils==7.0.0
packaging==23.2
pbr==6.0.0
platformdirs==4.1.0
prettytable==3.9.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycparser==2.21
pygerrit2==2.0.15
PyGithub==2.1.1
pyinotify==0.9.6
PyJWT==2.8.0
PyNaCl==1.5.0
pyparsing==2.4.7
pyperclip==1.8.2
pyrsistent==0.20.0
python-cinderclient==9.4.0
python-dateutil==2.8.2
python-heatclient==3.4.0
python-jenkins==1.8.2
python-keystoneclient==5.3.0
python-magnumclient==4.3.0
python-novaclient==18.4.0
python-openstackclient==6.0.0
python-swiftclient==4.4.0
pytz==2023.3.post1
PyYAML==6.0.1
referencing==0.32.1
requests==2.31.0
requests-oauthlib==1.3.1
requestsexceptions==1.4.0
rfc3986==2.0.0
rpds-py==0.17.1
rsa==4.9
ruamel.yaml==0.18.5
ruamel.yaml.clib==0.2.8
s3transfer==0.10.0
simplejson==3.19.2
six==1.16.0
smmap==5.0.1
soupsieve==2.5
stevedore==5.1.0
tabulate==0.9.0
toml==0.10.2
tomlkit==0.12.3
tqdm==4.66.1
typing_extensions==4.9.0
tzdata==2023.4
urllib3==1.26.18
virtualenv==20.25.0
wcwidth==0.2.13
websocket-client==1.7.0
wrapt==1.16.0
xdg==6.0.0
xmltodict==0.13.0
yq==3.2.3
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SET_JDK_VERSION=openjdk17
GIT_URL="git://cloud.onap.org/mirror"
[EnvInject] - Variables injected successfully.
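lf-activate-venv() is an LF releng helper whose internals are not echoed here; judging only from its INFO lines above, its effect is roughly the following sketch (the helper may well do more):

    # Approximate effect of lf-activate-venv() as logged above.
    python3 -m venv /tmp/venv-ONT8
    echo /tmp/venv-ONT8 > /tmp/.os_lf_venv           # "Save venv in file" (assumed)
    /tmp/venv-ONT8/bin/pip install lftools           # "Installing: lftools"
    export PATH=/tmp/venv-ONT8/bin:$PATH             # "Adding /tmp/venv-ONT8/bin to PATH"
    pip freeze                                       # "Generating Requirements File"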
[policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins4285938914174962631.sh
---> update-java-alternatives.sh
---> Updating Java version
---> Ubuntu/Debian system detected
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
openjdk version "17.0.4" 2022-07-19
OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins13632255580289132370.sh
+ /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
+ set +u
+ save_set
+ RUN_CSIT_SAVE_SET=ehxB
+ RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
+ '[' 1 -eq 0 ']'
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+ export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
+ export ROBOT_VARIABLES=
+ ROBOT_VARIABLES=
+ export PROJECT=pap
+ PROJECT=pap
+ cd /w/workspace/policy-pap-master-project-csit-pap
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ mktemp -d
++ ROBOT_VENV=/tmp/tmp.u7Z98ns4f8
++ echo ROBOT_VENV=/tmp/tmp.u7Z98ns4f8
+++ python3 --version
++ echo 'Python version is: Python 3.6.9'
Python version is: Python 3.6.9
++ python3 -m venv --clear /tmp/tmp.u7Z98ns4f8
++ source /tmp/tmp.u7Z98ns4f8/bin/activate
+++ deactivate nondestructive
+++ '[' -n '' ']'
+++ '[' -n '' ']'
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
+++ '[' -n '' ']'
+++ unset VIRTUAL_ENV
+++ '[' '!' nondestructive = nondestructive ']'
+++ VIRTUAL_ENV=/tmp/tmp.u7Z98ns4f8
+++ export VIRTUAL_ENV
+++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ PATH=/tmp/tmp.u7Z98ns4f8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
+++ export PATH
+++ '[' -n '' ']'
+++ '[' -z '' ']'
+++ _OLD_VIRTUAL_PS1=
+++ '[' 'x(tmp.u7Z98ns4f8) ' '!=' x ']'
+++ PS1='(tmp.u7Z98ns4f8) '
+++ export PS1
+++ '[' -n /bin/bash -o -n '' ']'
+++ hash -r
++ set -exu
++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
++ echo 'Installing Python Requirements'
Installing Python Requirements
++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2023.11.17
cffi==1.15.1
charset-normalizer==2.0.12
cryptography==40.0.2
decorator==5.1.1
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
idna==3.6
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
lxml==5.1.0
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
paramiko==3.4.0
pkg_resources==0.0.0
ply==3.11
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.1
python-dateutil==2.8.2
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
WebTest==3.0.0
zipp==3.6.0
++ mkdir -p /tmp/tmp.u7Z98ns4f8/src/onap
++ rm -rf /tmp/tmp.u7Z98ns4f8/src/onap/testsuite
++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
++ echo 'Installing python confluent-kafka library'
Installing python confluent-kafka library
++ python3 -m pip install -qq confluent-kafka
++ echo 'Uninstall docker-py and reinstall docker.'
Uninstall docker-py and reinstall docker.
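Condensed, and before the docker-py swap that follows, prepare-robot-env.sh does exactly what the xtrace above shows: build a throwaway venv, pin old pip/setuptools for the node's Python 3.6, and pull the Robot libraries plus the pre-release ONAP test-suite package from the Nexus staging index:

    # Condensed from the xtrace above (paths and pins as logged).
    ROBOT_VENV=$(mktemp -d)
    python3 -m venv --clear "$ROBOT_VENV"
    source "$ROBOT_VENV/bin/activate"
    python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
    python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
    # Pre-release ONAP robot libraries come from the Nexus staging index.
    python3 -m pip install -qq --upgrade \
        --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple \
        'robotframework-onap==0.6.0.*' --pre
    python3 -m pip install -qq confluent-kafka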
++ python3 -m pip uninstall -y -qq docker
++ python3 -m pip install -U -qq docker
++ python3 -m pip -qq freeze
bcrypt==4.0.1
beautifulsoup4==4.12.3
bitarray==2.9.2
certifi==2023.11.17
cffi==1.15.1
charset-normalizer==2.0.12
confluent-kafka==2.3.0
cryptography==40.0.2
decorator==5.1.1
deepdiff==5.7.0
dnspython==2.2.1
docker==5.0.3
elasticsearch==7.17.9
elasticsearch-dsl==7.4.1
enum34==1.1.10
future==0.18.3
idna==3.6
importlib-resources==5.4.0
ipaddr==2.2.0
isodate==0.6.1
Jinja2==3.0.3
jmespath==0.10.0
jsonpatch==1.32
jsonpath-rw==1.4.0
jsonpointer==2.3
kafka-python==2.0.2
lxml==5.1.0
MarkupSafe==2.0.1
more-itertools==5.0.0
netaddr==0.8.0
netifaces==0.11.0
odltools==0.1.28
ordered-set==4.0.2
paramiko==3.4.0
pbr==6.0.0
pkg_resources==0.0.0
ply==3.11
protobuf==3.19.6
pyang==2.6.0
pyangbind==0.8.1
pycparser==2.21
pyhocon==0.3.60
PyNaCl==1.5.0
pyparsing==3.1.1
python-dateutil==2.8.2
PyYAML==6.0.1
regex==2023.8.8
requests==2.27.1
robotframework==6.1.1
robotframework-httplibrary==0.4.2
robotframework-onap==0.6.0.dev105
robotframework-pythonlibcore==3.0.0
robotframework-requests==0.9.4
robotframework-selenium2library==3.0.0
robotframework-seleniumlibrary==5.1.3
robotframework-sshlibrary==3.8.0
robotlibcore-temp==1.0.2
scapy==2.5.0
scp==0.14.5
selenium==3.141.0
six==1.16.0
soupsieve==2.3.2.post1
urllib3==1.26.18
waitress==2.0.0
WebOb==1.8.7
websocket-client==1.3.1
WebTest==3.0.0
zipp==3.6.0
++ uname
++ grep -q Linux
++ sudo apt-get -y -qq install libxml2-utils
+ load_set
+ _setopts=ehuxB
++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o nounset
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo ehuxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +e
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +u
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ source_safely /tmp/tmp.u7Z98ns4f8/bin/activate
+ '[' -z /tmp/tmp.u7Z98ns4f8/bin/activate ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /tmp/tmp.u7Z98ns4f8/bin/activate
++ deactivate nondestructive
++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ unset _OLD_VIRTUAL_PATH
++ '[' -n '' ']'
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
++ '[' -n '' ']'
++ unset VIRTUAL_ENV
++ '[' '!' nondestructive = nondestructive ']'
++ VIRTUAL_ENV=/tmp/tmp.u7Z98ns4f8
++ export VIRTUAL_ENV
++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ PATH=/tmp/tmp.u7Z98ns4f8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
++ export PATH
++ '[' -n '' ']'
++ '[' -z '' ']'
++ _OLD_VIRTUAL_PS1='(tmp.u7Z98ns4f8) '
++ '[' 'x(tmp.u7Z98ns4f8) ' '!=' x ']'
++ PS1='(tmp.u7Z98ns4f8) (tmp.u7Z98ns4f8) '
++ export PS1
++ '[' -n /bin/bash -o -n '' ']'
++ hash -r
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
+ export TEST_OPTIONS=
+ TEST_OPTIONS=
++ mktemp -d
+ WORKDIR=/tmp/tmp.86XhMUNijP
+ cd /tmp/tmp.86XhMUNijP
+ docker login -u docker -p docker nexus3.onap.org:10001
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
+ SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
+++ GERRIT_BRANCH=master
+++ echo GERRIT_BRANCH=master
GERRIT_BRANCH=master
+++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
+++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
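Two idioms from the trace above are worth calling out. First, every script is run through source_safely, which relaxes errexit/pipefail so a failing sourced file cannot abort the whole job, then restores the recorded shell options; reconstructed from the xtrace (simplified: the real load_set replays each saved option individually, as the set +o lines show):

    relax_set() { set +e; set +o pipefail; }
    source_safely() {
        [ -z "$1" ] && return       # guard seen as: '[' -z ... ']'
        relax_set
        . "$1"
        load_set                    # re-applies the options saved by save_set
    }

Second, the docker login warning above is avoidable: piping the password keeps it off the command line and out of the process list.

    echo docker | docker login -u docker --password-stdin nexus3.onap.org:10001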
+++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
+++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
+++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
+++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
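The two sed invocations above are how the setup derives its test fixtures from the single vCPE example policy: one variant with a renamed monitored event, and a v2 copy with bumped version fields for the multi-version deployment tests. Spelled out, with hypothetical output redirections (the trace does not show where setup-pap.sh writes the results):

    SRC=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
    # Variant with a different event name (output file name is illustrative only).
    sed -e 's!Measurement_vGMUX!ADifferentValue!' "$SRC" > vCPE.modified.json
    # Second version of the same policy, used for multi-version deployment tests.
    sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' \
        -e 's!"policy-version": 1!"policy-version": 2!' "$SRC" > vCPE.v2.json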
++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
+++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
+++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
+++ grafana=false
+++ gui=false
+++ [[ 2 -gt 0 ]]
+++ key=apex-pdp
+++ case $key in
+++ echo apex-pdp
apex-pdp
+++ component=apex-pdp
+++ shift
+++ [[ 1 -gt 0 ]]
+++ key=--grafana
+++ case $key in
+++ grafana=true
+++ shift
+++ [[ 0 -gt 0 ]]
+++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
+++ echo 'Configuring docker compose...'
Configuring docker compose...
+++ source export-ports.sh
+++ source get-versions.sh
+++ '[' -z pap ']'
+++ '[' -n apex-pdp ']'
+++ '[' apex-pdp == logs ']'
+++ '[' true = true ']'
+++ echo 'Starting apex-pdp application with Grafana'
Starting apex-pdp application with Grafana
+++ docker-compose up -d apex-pdp grafana
Creating network "compose_default" with the default driver
Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
latest: Pulling from prom/prometheus
Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789
Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
latest: Pulling from grafana/grafana
Digest: sha256:6b5b37eb35bbf30e7f64bd7f0fd41c0a5b7637f65d3bf93223b04a192b8bf3e2
Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
10.10.2: Pulling from mariadb
Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-models-simulator
Digest: sha256:09b9abb94ede918d748d5f6ffece2e7592c9941527c37f3d00df286ee158ae05
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1-SNAPSHOT
Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
latest: Pulling from confluentinc/cp-zookeeper
Digest: sha256:000f1d11090f49fa8f67567e633bab4fea5dbd7d9119e7ee2ef259c509063593
Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
Pulling kafka (confluentinc/cp-kafka:latest)...
latest: Pulling from confluentinc/cp-kafka
Digest: sha256:51145a40d23336a11085ca695d02bdeee66fe01b582837c6d223384952226be9
Status: Downloaded newer image for confluentinc/cp-kafka:latest
Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-db-migrator
Digest: sha256:eb47623eeab9aad8524ecc877b6708ae74b57f9f3cfe77554ad0d1521491cb5d
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.1-SNAPSHOT
Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-api
Digest: sha256:bbf3044dd101de99d940093be953f041397d02b2f17a70f8da7719c160735c2e
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1-SNAPSHOT
Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-pap
Digest: sha256:37c4361d99c3f559835790653cd75fd194587e3e5951cbeb5086d1c0b8af6b74
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1-SNAPSHOT
Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT)...
3.1.1-SNAPSHOT: Pulling from onap/policy-apex-pdp
Digest: sha256:0fdae8f3a73915cdeb896f38ac7d5b74e658832fd10929dcf3fe68219098b89b
Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.1-SNAPSHOT
Creating simulator ...
Creating compose_zookeeper_1 ...
Creating prometheus ...
Creating mariadb ...
Creating mariadb ... done
Creating policy-db-migrator ...
Creating policy-db-migrator ... done
Creating policy-api ...
Creating policy-api ... done
Creating compose_zookeeper_1 ... done
Creating kafka ...
Creating simulator ... done
Creating prometheus ... done
Creating grafana ...
Creating grafana ... done
Creating kafka ... done
Creating policy-pap ...
Creating policy-pap ... done
Creating policy-apex-pdp ...
Creating policy-apex-pdp ... done
+++ echo 'Prometheus server: http://localhost:30259'
Prometheus server: http://localhost:30259
+++ echo 'Grafana server: http://localhost:30269'
Grafana server: http://localhost:30269
+++ cd /w/workspace/policy-pap-master-project-csit-pap
++ sleep 10
++ unset http_proxy https_proxy
++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
Waiting for REST to come up on localhost port 30003...
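wait_for_rest.sh itself is never echoed into the log, so its contents are an assumption; a minimal script matching the observed behavior (poll a TCP port until it answers, printing container status while it waits) would be:

    # Assumed shape of wait_for_rest.sh; only its output appears in this log.
    host="$1" port="$2"
    echo "Waiting for REST to come up on $host port $port..."
    while ! nc -z "$host" "$port"; do
        # The repeated NAMES/STATUS snapshots below suggest a status print per poll.
        docker ps --format 'table {{ .Names }}\t{{ .Status }}'
        sleep 5
    done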
NAMES                 STATUS
policy-apex-pdp       Up 10 seconds
policy-pap            Up 11 seconds
grafana               Up 13 seconds
kafka                 Up 12 seconds
policy-api            Up 17 seconds
policy-db-migrator    Up 18 seconds
mariadb               Up 19 seconds
compose_zookeeper_1   Up 16 seconds
prometheus            Up 14 seconds
simulator             Up 15 seconds
NAMES                 STATUS
policy-apex-pdp       Up 15 seconds
policy-pap            Up 16 seconds
grafana               Up 18 seconds
kafka                 Up 17 seconds
policy-api            Up 23 seconds
mariadb               Up 24 seconds
compose_zookeeper_1   Up 21 seconds
prometheus            Up 19 seconds
simulator             Up 20 seconds
NAMES                 STATUS
policy-apex-pdp       Up 20 seconds
policy-pap            Up 21 seconds
grafana               Up 23 seconds
kafka                 Up 22 seconds
policy-api            Up 28 seconds
mariadb               Up 29 seconds
compose_zookeeper_1   Up 26 seconds
prometheus            Up 24 seconds
simulator             Up 25 seconds
NAMES                 STATUS
policy-apex-pdp       Up 25 seconds
policy-pap            Up 27 seconds
grafana               Up 28 seconds
kafka                 Up 27 seconds
policy-api            Up 33 seconds
mariadb               Up 34 seconds
compose_zookeeper_1   Up 31 seconds
prometheus            Up 29 seconds
simulator             Up 30 seconds
NAMES                 STATUS
policy-apex-pdp       Up 30 seconds
policy-pap            Up 32 seconds
grafana               Up 33 seconds
kafka                 Up 33 seconds
policy-api            Up 38 seconds
mariadb               Up 39 seconds
compose_zookeeper_1   Up 36 seconds
prometheus            Up 34 seconds
simulator             Up 35 seconds
NAMES                 STATUS
policy-apex-pdp       Up 36 seconds
policy-pap            Up 37 seconds
grafana               Up 39 seconds
kafka                 Up 38 seconds
policy-api            Up 43 seconds
mariadb               Up 45 seconds
compose_zookeeper_1   Up 42 seconds
prometheus            Up 40 seconds
simulator             Up 41 seconds
NAMES                 STATUS
policy-apex-pdp       Up 41 seconds
policy-pap            Up 42 seconds
grafana               Up 44 seconds
kafka                 Up 43 seconds
policy-api            Up 48 seconds
mariadb               Up 50 seconds
compose_zookeeper_1   Up 47 seconds
prometheus            Up 45 seconds
simulator             Up 46 seconds
++ export 'SUITES=pap-test.robot pap-slas.robot'
++ SUITES='pap-test.robot pap-slas.robot'
++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ docker_stats
+ tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:15:09 up 4 min,  0 users,  load average: 3.42, 1.66, 0.67
Tasks: 208 total,   1 running, 131 sleeping,   0 stopped,   0 zombie
%Cpu(s): 12.5 us,  2.6 sy,  0.0 ni, 79.2 id,  5.6 wa,  0.0 hi,  0.1 si,  0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.7G         22G        1.3M        6.5G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 41 seconds
policy-pap            Up 42 seconds
grafana               Up 44 seconds
kafka                 Up 43 seconds
policy-api            Up 49 seconds
mariadb               Up 50 seconds
compose_zookeeper_1   Up 47 seconds
prometheus            Up 45 seconds
simulator             Up 46 seconds
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %    MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
9ee0a5aa9a61   policy-apex-pdp       12.71%   167.9MiB / 31.41GiB   0.52%   24.4kB / 30.7kB   0B / 0B           47
48e648213310   policy-pap            6.62%    521.9MiB / 31.41GiB   1.62%   52.2kB / 66.5kB   0B / 182MB        59
92be76b711c9   grafana               0.01%    53.91MiB / 31.41GiB   0.17%   19.2kB / 3.44kB   0B / 23.9MB       15
3117adf9b6b0   kafka                 42.94%   368.3MiB / 31.41GiB   1.14%   202kB / 187kB     0B / 504kB        81
d86330842446   policy-api            0.12%    497.7MiB / 31.41GiB   1.55%   1MB / 711kB       0B / 0B           56
12bd0464a074   mariadb               0.02%    101.6MiB / 31.41GiB   0.32%   997kB / 1.19MB    11.1MB / 68MB     40
81121e4fe66c   compose_zookeeper_1   6.07%    104.3MiB / 31.41GiB   0.32%   126kB / 117kB     0B / 344kB        59
7ea73bce9119   prometheus            0.01%    18.59MiB / 31.41GiB   0.06%   1.56kB / 474B     0B / 0B           13
996e6e0c6551   simulator             0.08%    123.6MiB / 31.41GiB   0.38%   1.27kB / 0B       0B / 0B           76
+ echo
+ cd /tmp/tmp.86XhMUNijP
+ echo 'Reading the testplan:'
Reading the testplan:
+ echo 'pap-test.robot pap-slas.robot'
+ egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
+ sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
+ cat testplan.txt
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
++ xargs
+ SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
+ echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
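The testplan handling just above is a small pipeline: strip comment and blank lines, prefix every suite with the tests directory, then flatten the result to a single space-separated string. An equivalent one-liner over the generated testplan.txt:

    # Equivalent of the egrep/sed/xargs trace above.
    SUITES=$(egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
        | sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' \
        | xargs)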
+ echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
+ relax_set
+ set +e
+ set +o pipefail
+ python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
==============================================================================
pap
==============================================================================
pap.Pap-Test
==============================================================================
LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
------------------------------------------------------------------------------
LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
------------------------------------------------------------------------------
LoadNodeTemplates :: Create node templates in database using speci... | PASS |
------------------------------------------------------------------------------
Healthcheck :: Verify policy pap health check | PASS |
------------------------------------------------------------------------------
Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
------------------------------------------------------------------------------
Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
------------------------------------------------------------------------------
AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
------------------------------------------------------------------------------
ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
------------------------------------------------------------------------------
DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
------------------------------------------------------------------------------
UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
------------------------------------------------------------------------------
UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
------------------------------------------------------------------------------
QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
------------------------------------------------------------------------------
DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
------------------------------------------------------------------------------
DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
------------------------------------------------------------------------------
QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
------------------------------------------------------------------------------
pap.Pap-Test | PASS |
22 tests, 22 passed, 0 failed
==============================================================================
pap.Pap-Slas
==============================================================================
WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
------------------------------------------------------------------------------
ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
------------------------------------------------------------------------------
ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
------------------------------------------------------------------------------
pap.Pap-Slas | PASS |
8 tests, 8 passed, 0 failed
==============================================================================
pap | PASS |
30 tests, 30 passed, 0 failed
==============================================================================
Output:  /tmp/tmp.86XhMUNijP/output.xml
Log:     /tmp/tmp.86XhMUNijP/log.html
Report:  /tmp/tmp.86XhMUNijP/report.html
+ RESULT=0
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ echo 'RESULT: 0'
RESULT: 0
+ exit 0
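RESULT comes straight from robot's exit status: python3 -m robot.run exits with the number of failed tests (capped at 250), so the 0 here corresponds exactly to the "30 tests, 30 passed, 0 failed" summary above. The wrapper therefore only needs:

    python3 -m robot.run -N pap ${ROBOT_VARIABLES} ${SUITES}
    RESULT=$?                  # 0 means every test passed
    echo "RESULT: $RESULT"
    exit $RESULT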
+ on_exit
+ rc=0
+ [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
grafana               Up 2 minutes
kafka                 Up 2 minutes
policy-api            Up 2 minutes
mariadb               Up 2 minutes
compose_zookeeper_1   Up 2 minutes
prometheus            Up 2 minutes
simulator             Up 2 minutes
+ docker_stats
++ uname -s
+ '[' Linux == Darwin ']'
+ sh -c 'top -bn1 | head -3'
top - 23:16:59 up 6 min,  0 users,  load average: 0.69, 1.23, 0.63
Tasks: 197 total,   1 running, 129 sleeping,   0 stopped,   0 zombie
%Cpu(s): 10.3 us,  2.0 sy,  0.0 ni, 83.1 id,  4.5 wa,  0.0 hi,  0.1 si,  0.1 st
+ echo
+ sh -c 'free -h'
              total        used        free      shared  buff/cache   available
Mem:            31G        2.8G         22G        1.3M        6.5G         28G
Swap:          1.0G          0B        1.0G
+ echo
+ docker ps --format 'table {{ .Names }}\t{{ .Status }}'
NAMES                 STATUS
policy-apex-pdp       Up 2 minutes
policy-pap            Up 2 minutes
grafana               Up 2 minutes
kafka                 Up 2 minutes
policy-api            Up 2 minutes
mariadb               Up 2 minutes
compose_zookeeper_1   Up 2 minutes
prometheus            Up 2 minutes
simulator             Up 2 minutes
+ echo
+ docker stats --no-stream
CONTAINER ID   NAME                  CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
9ee0a5aa9a61   policy-apex-pdp       0.55%   185.9MiB / 31.41GiB   0.58%   79.4kB / 124kB    0B / 0B           50
48e648213310   policy-pap            0.33%   515.9MiB / 31.41GiB   1.60%   2.36MB / 856kB    0B / 182MB        63
92be76b711c9   grafana               0.01%   55.21MiB / 31.41GiB   0.17%   20.2kB / 4.53kB   0B / 23.9MB       15
3117adf9b6b0   kafka                 2.65%   392.4MiB / 31.41GiB   1.22%   383kB / 338kB     0B / 602kB        83
d86330842446   policy-api            0.10%   561MiB / 31.41GiB     1.74%   2.49MB / 1.27MB   0B / 0B           56
12bd0464a074   mariadb               0.01%   102.9MiB / 31.41GiB   0.32%   1.95MB / 4.77MB   11.1MB / 68.4MB   28
81121e4fe66c   compose_zookeeper_1   0.11%   105.6MiB / 31.41GiB   0.33%   134kB / 123kB     0B / 344kB        59
7ea73bce9119   prometheus            0.25%   24.98MiB / 31.41GiB   0.08%   191kB / 11.1kB    0B / 0B           13
996e6e0c6551   simulator             0.07%   122.5MiB / 31.41GiB   0.38%   1.5kB / 0B        0B / 0B           76
+ echo
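The before/after resource snapshots are plain docker CLI output; the docker_stats function itself is not printed, but from its trace it amounts to the following (the tee to the archive file is shown only for the first, post-setup snapshot):

    # Reconstructed from the docker_stats trace above.
    {
        sh -c 'top -bn1 | head -3'
        echo
        sh -c 'free -h'
        echo
        docker ps --format 'table {{ .Names }}\t{{ .Status }}'
        echo
        docker stats --no-stream
    } | tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt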
+ source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
+ '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
+ relax_set
+ set +e
+ set +o pipefail
+ . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
++ echo 'Shut down started!'
Shut down started!
++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
++ source export-ports.sh
++ source get-versions.sh
++ echo 'Collecting logs from docker compose containers...'
Collecting logs from docker compose containers...
++ docker-compose logs
++ cat docker_compose.log
Attaching to policy-apex-pdp, policy-pap, grafana, kafka, policy-api, policy-db-migrator, mariadb, compose_zookeeper_1, prometheus, simulator
zookeeper_1 | ===> User
zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
zookeeper_1 | ===> Configuring ...
zookeeper_1 | ===> Running preflight checks ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
zookeeper_1 | ===> Launching ...
zookeeper_1 | ===> Launching zookeeper ...
zookeeper_1 | [2024-01-17 23:14:25,305] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-17 23:14:25,318] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-17 23:14:25,318] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-17 23:14:25,318] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-17 23:14:25,318] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-17 23:14:25,321] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-01-17 23:14:25,321] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-01-17 23:14:25,321] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
zookeeper_1 | [2024-01-17 23:14:25,321] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
zookeeper_1 | [2024-01-17 23:14:25,322] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
zookeeper_1 | [2024-01-17 23:14:25,323] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-17 23:14:25,323] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-17 23:14:25,324] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-17 23:14:25,324] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-17 23:14:25,324] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
zookeeper_1 | [2024-01-17 23:14:25,324] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
zookeeper_1 | [2024-01-17 23:14:25,335] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@5fa07e12 (org.apache.zookeeper.server.ServerMetrics)
zookeeper_1 | [2024-01-17 23:14:25,338] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-01-17 23:14:25,338] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
zookeeper_1 | [2024-01-17 23:14:25,340] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-01-17 23:14:25,350] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,350] INFO   ______                  _                              (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,350] INFO  |___  /                 | |                             (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,350] INFO     / /   ___     ___    | | __   ___    ___   _ __     ___   _ __  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,350] INFO    / /   / _ \   / _ \   | |/ /  / _ \  / _ \ | '_ \   / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,350] INFO   / /__ | (_) | | (_) |  |   <  |  __/ |  __/ | |_) | |  __/ | |    (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,350] INFO  /_____| \___/   \___/   |_|\_\  \___|  \___| | .__/   \___| |_|    (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,350] INFO                                               | |                   (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,350] INFO                                               |_|                   (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,350] INFO  (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,351] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,351] INFO Server environment:host.name=81121e4fe66c (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,351] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,351] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,351] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,351] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,352] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,353] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
zookeeper_1 | [2024-01-17 23:14:25,355] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,355] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,357] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper_1 | [2024-01-17 23:14:25,357] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
zookeeper_1 | [2024-01-17 23:14:25,358] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-17 23:14:25,358] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-17 23:14:25,358] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-17 23:14:25,358] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-17 23:14:25,358] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-17 23:14:25,358] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
zookeeper_1 | [2024-01-17 23:14:25,360] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,360] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,361] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper_1 | [2024-01-17 23:14:25,361] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
zookeeper_1 | [2024-01-17 23:14:25,361] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,383] INFO Logging initialized @518ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
zookeeper_1 | [2024-01-17 23:14:25,488] WARN o.e.j.s.ServletContextHandler@45385f75{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-01-17 23:14:25,488] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-01-17 23:14:25,506] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server)
zookeeper_1 | [2024-01-17 23:14:25,532] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-01-17 23:14:25,532] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-01-17 23:14:25,533] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
zookeeper_1 | [2024-01-17 23:14:25,539] WARN ServletContext@o.e.j.s.ServletContextHandler@45385f75{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
zookeeper_1 | [2024-01-17 23:14:25,548] INFO Started o.e.j.s.ServletContextHandler@45385f75{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
zookeeper_1 | [2024-01-17 23:14:25,562] INFO Started ServerConnector@304bb45b{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
zookeeper_1 | [2024-01-17 23:14:25,562] INFO Started @698ms (org.eclipse.jetty.server.Server)
zookeeper_1 | [2024-01-17 23:14:25,562] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
zookeeper_1 | [2024-01-17 23:14:25,570] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper_1 | [2024-01-17 23:14:25,571] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
zookeeper_1 | [2024-01-17 23:14:25,573] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2024-01-17 23:14:25,574] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
zookeeper_1 | [2024-01-17 23:14:25,596] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper_1 | [2024-01-17 23:14:25,596] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
zookeeper_1 | [2024-01-17 23:14:25,597] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-01-17 23:14:25,597] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-01-17 23:14:25,602] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
zookeeper_1 | [2024-01-17 23:14:25,602] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-01-17 23:14:25,605] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
zookeeper_1 | [2024-01-17 23:14:25,606] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
zookeeper_1 | [2024-01-17 23:14:25,606] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper_1 | [2024-01-17 23:14:25,616] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
zookeeper_1 | [2024-01-17 23:14:25,616] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper_1 | [2024-01-17 23:14:25,629] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
zookeeper_1 | [2024-01-17 23:14:25,629] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
zookeeper_1 | [2024-01-17 23:14:30,382] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
kafka | ===> User
kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
kafka | ===> Configuring ...
kafka | Running in Zookeeper mode...
kafka | ===> Running preflight checks ...
kafka | ===> Check if /var/lib/kafka/data is writable ...
kafka | ===> Check if Zookeeper is healthy ...
kafka | [2024-01-17 23:14:30,303] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-17 23:14:30,303] INFO Client environment:host.name=3117adf9b6b0 (org.apache.zookeeper.ZooKeeper)
kafka | [2024-01-17 23:14:30,303] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
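Before launching the broker, the cp-kafka entrypoint runs the "Check if Zookeeper is healthy" step seen above using a ZooKeeper client (hence the Client environment lines). A hypothetical manual equivalent, assuming the server's four-letter-word allowlist permits these commands, is:

    # Manual health probe; needs 4lw.commands.whitelist to include ruok/srvr.
    echo ruok | nc localhost 2181    # a healthy server answers "imok"
    echo srvr | nc localhost 2181    # prints version, latency, and mode (standalone here)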
policy-api | policy-db-migrator (172.17.0.6:6824) open policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml policy-api | policy-api | . ____ _ __ _ _ policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / policy-api | =========|_|==============|___/=/_/_/_/ policy-api | :: Spring Boot :: (v3.1.4) policy-api | policy-api | [2024-01-17T23:14:40.159+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.9 with PID 28 (/app/api.jar started by policy in /opt/app/policy/api/bin) policy-api | [2024-01-17T23:14:40.160+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" policy-api | [2024-01-17T23:14:42.021+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. policy-api | [2024-01-17T23:14:42.108+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 76 ms. Found 6 JPA repository interfaces. policy-api | [2024-01-17T23:14:42.520+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-01-17T23:14:42.521+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler policy-api | [2024-01-17T23:14:43.202+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) policy-api | [2024-01-17T23:14:43.211+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] policy-api | [2024-01-17T23:14:43.213+00:00|INFO|StandardService|main] Starting service [Tomcat] policy-api | [2024-01-17T23:14:43.213+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.16] policy-api | [2024-01-17T23:14:43.310+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext policy-api | [2024-01-17T23:14:43.310+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3088 ms policy-api | [2024-01-17T23:14:43.867+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] policy-api | [2024-01-17T23:14:43.943+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 policy-api | [2024-01-17T23:14:43.946+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer policy-api | [2024-01-17T23:14:43.992+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled policy-api | [2024-01-17T23:14:44.432+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer policy-api | [2024-01-17T23:14:44.459+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... policy-api | [2024-01-17T23:14:44.551+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717 policy-api | [2024-01-17T23:14:44.553+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
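[editor note] The HikariPool-1 lines above show policy-api pooling connections to the mariadb container through the MariaDB JDBC driver. A minimal sketch of an equivalent pool; the schema name and credentials below are assumptions, since the log does not print them:

// Illustrative HikariCP pool against the mariadb container from the log.
// Requires the mariadb-java-client driver on the classpath.
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;

public class PolicyDbPool {
    public static void main(String[] args) throws Exception {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // hypothetical schema name
        cfg.setUsername("policy_user"); // assumption, not shown in the log
        cfg.setPassword("policy_user"); // assumption, not shown in the log
        cfg.setMaximumPoolSize(10);
        try (HikariDataSource ds = new HikariDataSource(cfg);
             Connection c = ds.getConnection()) {
            System.out.println("connected: " + c.getMetaData().getURL());
        }
    }
}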
policy-api | [2024-01-17T23:14:44.595+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) policy-api | [2024-01-17T23:14:44.597+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead policy-api | [2024-01-17T23:14:46.784+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) policy-api | [2024-01-17T23:14:46.787+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' policy-api | [2024-01-17T23:14:47.990+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml policy-api | [2024-01-17T23:14:48.830+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] policy-api | [2024-01-17T23:14:49.949+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning policy-api | [2024-01-17T23:14:50.184+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@6149184e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6f3a8d5e, org.springframework.security.web.context.SecurityContextHolderFilter@39d666e0, org.springframework.security.web.header.HeaderWriterFilter@5f160f9c, org.springframework.security.web.authentication.logout.LogoutFilter@79462469, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@407bfc49, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@56bc8c45, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2f29400e, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@680f7a5e, org.springframework.security.web.access.ExceptionTranslationFilter@9bc10bd, org.springframework.security.web.access.intercept.AuthorizationFilter@1e33203f] policy-api | [2024-01-17T23:14:51.022+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' policy-api | [2024-01-17T23:14:51.078+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] kafka | [2024-01-17 23:14:30,303] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,303] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,303] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/kafka-metadata-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/kafka_2.13-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/kafka-server-common-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/kafka-raft-7.5.3-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.5.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.5.3.jar:/usr/share/java/cp-base-new/kafka-storage-7.5.3-ccs.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.5.3-ccs.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.5.3-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.5.3.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.5.3-ccs.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO 
Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,304] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,307] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@62bd765 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,310] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-01-17 23:14:30,313] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-01-17 23:14:30,320] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-17 23:14:30,334] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-17 23:14:30,334] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-17 23:14:30,341] INFO Socket connection established, initiating session, client: /172.17.0.8:40648, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-17 23:14:30,625] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003dbd20000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-17 23:14:30,753] INFO Session: 0x1000003dbd20000 closed (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:30,753] INFO EventThread shut down for session: 0x1000003dbd20000 (org.apache.zookeeper.ClientCnxn) kafka | Using log4j config /etc/kafka/log4j.properties kafka | ===> Launching ... kafka | ===> Launching kafka ... kafka | [2024-01-17 23:14:31,394] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) kafka | [2024-01-17 23:14:31,681] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) kafka | [2024-01-17 23:14:31,745] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2024-01-17 23:14:31,746] INFO starting (kafka.server.KafkaServer) kafka | [2024-01-17 23:14:31,747] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2024-01-17 23:14:31,764] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-17 23:14:31,769] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,769] INFO Client environment:host.name=3117adf9b6b0 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,769] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,769] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,769] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,770] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/kafka-metadata-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/connect-runtime-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/connect-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-library-2.13.10.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/reflections-0.9.12.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/trogdor-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka-raft-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/kafka-storage-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/.
./share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/kafka-streams-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/kafka-tools-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/kafka-shell-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/connect-mirror-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-json-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.10.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/connect-transforms-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.5.3-ccs.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,770] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,770] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,770] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,770] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,770] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,770] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,770] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,770] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,770] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,770] INFO Client environment:os.memory.free=1009MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,771] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,771] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,828] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@68be8808 (org.apache.zookeeper.ZooKeeper) kafka | [2024-01-17 23:14:31,832] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2024-01-17 23:14:31,837] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-17 23:14:31,839] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) policy-api | [2024-01-17T23:14:51.099+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' policy-api | [2024-01-17T23:14:51.118+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.705 seconds (process running for 12.287) policy-api | [2024-01-17T23:15:12.805+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' policy-api | [2024-01-17T23:15:12.805+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' policy-api | [2024-01-17T23:15:12.806+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms policy-api | [2024-01-17T23:15:13.067+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: policy-api | [] kafka | [2024-01-17 23:14:31,841] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-17 23:14:31,845] INFO Socket connection established, initiating session, client: /172.17.0.8:40650, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-17 23:14:31,994] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003dbd20001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2024-01-17 23:14:32,000] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2024-01-17 23:14:33,032] INFO Cluster ID = TCpMGCYeSECduTbHgcA3wg (kafka.server.KafkaServer) kafka | [2024-01-17 23:14:33,036] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) kafka | [2024-01-17 23:14:33,082] INFO KafkaConfig values: kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.include.jmx.reporter = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.heartbeat.interval.ms = 2000 kafka | broker.id = 1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | broker.session.timeout.ms = 9000 kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.listener.names = null kafka | controller.quorum.append.linger.ms = 25 kafka | controller.quorum.election.backoff.max.ms = 1000 kafka | controller.quorum.election.timeout.ms = 1000 kafka | controller.quorum.fetch.timeout.ms = 2000 kafka | controller.quorum.request.timeout.ms = 2000 kafka | controller.quorum.retry.backoff.ms = 20 kafka | controller.quorum.voters = [] kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delegation.token.secret.key = null kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | early.start.listeners = null kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.consumer.assignors = [] kafka | group.consumer.heartbeat.interval.ms = 5000 kafka | group.consumer.max.heartbeat.interval.ms = 15000 kafka | group.consumer.max.session.timeout.ms = 60000 kafka | group.consumer.max.size = 2147483647 kafka | group.consumer.min.heartbeat.interval.ms = 5000 kafka | group.consumer.min.session.timeout.ms = 45000 kafka | group.consumer.session.timeout.ms = 45000 kafka | group.coordinator.new.enable = false kafka | group.coordinator.threads = 1 kafka | group.initial.rebalance.delay.ms = 3000 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | initial.broker.registration.timeout.ms = 60000 kafka | inter.broker.listener.name = PLAINTEXT kafka | inter.broker.protocol.version = 3.5-IV2 kafka | kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT kafka | listeners = 
PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /var/lib/kafka/data kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 3.0-IV1 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka | log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 policy-apex-pdp | Waiting for mariadb port 3306... policy-apex-pdp | mariadb (172.17.0.2:3306) open policy-apex-pdp | Waiting for kafka port 9092... policy-apex-pdp | kafka (172.17.0.8:9092) open policy-apex-pdp | Waiting for pap port 6969... 
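[editor note] The ConsumerConfig dump that follows shows how the apex-pdp starter wires its Kafka consumer: bootstrap.servers=[kafka:9092], string key/value deserializers, auto.offset.reset=latest, and (as logged later) a subscription to the policy-pdp-pap topic. A minimal consumer matching those values; the group id here is a readable stand-in for the UUID group id in the log:

// Illustrative consumer mirroring the apex-pdp ConsumerConfig values below.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "pdp-listener"); // log uses a UUID group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("%s@%d: %s%n", r.topic(), r.offset(), r.value());
            }
        }
    }
}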
policy-apex-pdp | pap (172.17.0.10:6969) open policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' policy-apex-pdp | [2024-01-17T23:15:05.962+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] policy-apex-pdp | [2024-01-17T23:15:06.118+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-4041dc88-5007-445a-911f-3e52b8d238d9-1 policy-apex-pdp | client.rack = policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 4041dc88-5007-445a-911f-3e52b8d238d9 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 
0.8 policy-apex-pdp | sasl.login.callback.handler.class = null policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | overriding logback.xml simulator | 2024-01-17 23:14:23,843 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json simulator | 2024-01-17 23:14:24,010 INFO org.onap.policy.models.simulators starting simulator | 2024-01-17 23:14:24,011 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties simulator | 2024-01-17 23:14:24,302 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION simulator | 2024-01-17 23:14:24,303 INFO org.onap.policy.models.simulators starting A&AI simulator simulator | 2024-01-17 23:14:24,412 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-01-17 23:14:24,422 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-17 23:14:24,425 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,STOPPED}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-17 23:14:24,428 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 
5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 simulator | 2024-01-17 23:14:24,485 INFO Session workerName=node0 simulator | 2024-01-17 23:14:24,977 INFO Using GSON for REST calls simulator | 2024-01-17 23:14:25,045 INFO Started o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE} simulator | 2024-01-17 23:14:25,055 INFO Started A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} simulator | 2024-01-17 23:14:25,061 INFO Started Server@16746061{STARTING}[11.0.18,sto=0] @1718ms simulator | 2024-01-17 23:14:25,061 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@16746061{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@57fd91c9{/,null,AVAILABLE}, connector=A&AI simulator@53dacd14{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1a7288a3==org.glassfish.jersey.servlet.ServletContainer@27060b2b{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4364 ms. simulator | 2024-01-17 23:14:25,067 INFO org.onap.policy.models.simulators starting SDNC simulator simulator | 2024-01-17 23:14:25,076 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-01-17 23:14:25,076 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-17 23:14:25,084 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, 
jettyServer=Server@75459c75{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,STOPPED}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-17 23:14:25,085 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 simulator | 2024-01-17 23:14:25,106 INFO Session workerName=node0 simulator | 2024-01-17 23:14:25,211 INFO Using GSON for REST calls simulator | 2024-01-17 23:14:25,221 INFO Started o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE} simulator | 2024-01-17 23:14:25,227 INFO Started SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} simulator | 2024-01-17 23:14:25,228 INFO Started Server@75459c75{STARTING}[11.0.18,sto=0] @1884ms grafana | logger=settings t=2024-01-17T23:14:25.306721651Z level=info msg="Starting Grafana" version=10.2.3 commit=1e84fede543acc892d2a2515187e545eb047f237 branch=HEAD compiled=2023-12-18T15:46:07Z grafana | logger=settings t=2024-01-17T23:14:25.306941895Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini grafana | logger=settings t=2024-01-17T23:14:25.306952915Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini grafana | logger=settings t=2024-01-17T23:14:25.306956965Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" grafana | logger=settings t=2024-01-17T23:14:25.306960325Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" grafana | logger=settings t=2024-01-17T23:14:25.306963485Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" grafana | logger=settings t=2024-01-17T23:14:25.306966525Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" grafana | logger=settings t=2024-01-17T23:14:25.306970715Z level=info msg="Config overridden from command line" arg="default.log.mode=console" grafana | logger=settings t=2024-01-17T23:14:25.306973805Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" grafana | logger=settings t=2024-01-17T23:14:25.306976665Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" grafana | logger=settings t=2024-01-17T23:14:25.306979405Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" grafana | logger=settings t=2024-01-17T23:14:25.306983015Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" grafana | logger=settings t=2024-01-17T23:14:25.306985965Z level=info msg=Target target=[all] grafana | logger=settings t=2024-01-17T23:14:25.306990865Z level=info msg="Path Home" path=/usr/share/grafana grafana | logger=settings t=2024-01-17T23:14:25.306993785Z level=info msg="Path Data" path=/var/lib/grafana grafana | logger=settings t=2024-01-17T23:14:25.306997705Z level=info msg="Path Logs" path=/var/log/grafana grafana | logger=settings t=2024-01-17T23:14:25.307000395Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins grafana | logger=settings t=2024-01-17T23:14:25.307003165Z 
level=info msg="Path Provisioning" path=/etc/grafana/provisioning grafana | logger=settings t=2024-01-17T23:14:25.307006315Z level=info msg="App mode production" grafana | logger=sqlstore t=2024-01-17T23:14:25.307294061Z level=info msg="Connecting to DB" dbtype=sqlite3 grafana | logger=sqlstore t=2024-01-17T23:14:25.307311931Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db grafana | logger=migrator t=2024-01-17T23:14:25.307867429Z level=info msg="Starting DB migrations" grafana | logger=migrator t=2024-01-17T23:14:25.308873143Z level=info msg="Executing migration" id="create migration_log table" grafana | logger=migrator t=2024-01-17T23:14:25.309724846Z level=info msg="Migration successfully executed" id="create migration_log table" duration=831.502µs grafana | logger=migrator t=2024-01-17T23:14:25.314551327Z level=info msg="Executing migration" id="create user table" grafana | logger=migrator t=2024-01-17T23:14:25.315199417Z level=info msg="Migration successfully executed" id="create user table" duration=647.91µs grafana | logger=migrator t=2024-01-17T23:14:25.318685319Z level=info msg="Executing migration" id="add unique index user.login" grafana | logger=migrator t=2024-01-17T23:14:25.319262287Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=578.688µs grafana | logger=migrator t=2024-01-17T23:14:25.322218001Z level=info msg="Executing migration" id="add unique index user.email" grafana | logger=migrator t=2024-01-17T23:14:25.322728948Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=509.197µs grafana | logger=migrator t=2024-01-17T23:14:25.327803253Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" grafana | logger=migrator t=2024-01-17T23:14:25.328354581Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=551.578µs grafana | logger=migrator t=2024-01-17T23:14:25.356644839Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" grafana | logger=migrator t=2024-01-17T23:14:25.357562994Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=920.695µs grafana | logger=migrator t=2024-01-17T23:14:25.362568297Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" grafana | logger=migrator t=2024-01-17T23:14:25.364573067Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.00456ms grafana | logger=migrator t=2024-01-17T23:14:25.3702218Z level=info msg="Executing migration" id="create user table v2" grafana | logger=migrator t=2024-01-17T23:14:25.370779558Z level=info msg="Migration successfully executed" id="create user table v2" duration=558.418µs grafana | logger=migrator t=2024-01-17T23:14:25.374212479Z level=info msg="Executing migration" id="create index UQE_user_login - v2" grafana | logger=migrator t=2024-01-17T23:14:25.374786048Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=573.389µs grafana | logger=migrator t=2024-01-17T23:14:25.378663985Z level=info msg="Executing migration" id="create index UQE_user_email - v2" grafana | logger=migrator t=2024-01-17T23:14:25.379227363Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=563.338µs grafana | logger=migrator t=2024-01-17T23:14:25.384948408Z level=info msg="Executing migration" id="copy data_source v1 to v2" grafana | 
logger=migrator t=2024-01-17T23:14:25.385227592Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=279.324µs grafana | logger=migrator t=2024-01-17T23:14:25.387900801Z level=info msg="Executing migration" id="Drop old table user_v1" simulator | 2024-01-17 23:14:25,228 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@75459c75{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@183e8023{/,null,AVAILABLE}, connector=SDNC simulator@63b1d4fa{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-201b6b6f==org.glassfish.jersey.servlet.ServletContainer@673ce4f9{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4856 ms. simulator | 2024-01-17 23:14:25,229 INFO org.onap.policy.models.simulators starting SO simulator simulator | 2024-01-17 23:14:25,231 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-01-17 23:14:25,232 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-17 23:14:25,233 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,STOPPED}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-17 23:14:25,234 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 simulator | 2024-01-17 23:14:25,246 INFO Session workerName=node0 simulator | 2024-01-17 23:14:25,327 INFO Using GSON for REST calls simulator | 2024-01-17 23:14:25,339 INFO Started o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE} simulator | 2024-01-17 23:14:25,340 INFO Started SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} simulator | 2024-01-17 23:14:25,341 INFO Started Server@30bcf3c1{STARTING}[11.0.18,sto=0] @1997ms simulator | 2024-01-17 23:14:25,341 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@30bcf3c1{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@2a3c96e3{/,null,AVAILABLE}, connector=SO simulator@3e5499cc{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-b78a709==org.glassfish.jersey.servlet.ServletContainer@1399f374{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4892 ms. simulator | 2024-01-17 23:14:25,342 INFO org.onap.policy.models.simulators starting VFC simulator simulator | 2024-01-17 23:14:25,344 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START simulator | 2024-01-17 23:14:25,344 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-17 23:14:25,344 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,STOPPED}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING simulator | 2024-01-17 23:14:25,345 INFO jetty-11.0.18; built: 2023-10-27T02:14:36.036Z; git: 5a9a771a9fbcb9d36993630850f612581b78c13f; jvm 17.0.9+8-alpine-r0 simulator | 2024-01-17 23:14:25,353 INFO Session workerName=node0 simulator | 2024-01-17 23:14:25,412 INFO Using GSON for REST calls simulator | 2024-01-17 23:14:25,426 INFO Started o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE} simulator | 2024-01-17 23:14:25,428 INFO Started VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} simulator | 2024-01-17 23:14:25,430 INFO Started Server@a776e{STARTING}[11.0.18,sto=0] @2086ms simulator | 2024-01-17 23:14:25,430 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@a776e{STARTED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@792bbc74{/,null,AVAILABLE}, connector=VFC simulator@5b444398{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-42f48531==org.glassfish.jersey.servlet.ServletContainer@f8b49435{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4914 ms. 
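[editor note] At this point all four simulators are listening: A&AI on 6666, SDNC on 6668, SO on 6669, VFC on 6670. The "Waiting for ... port" checks used elsewhere in this log are plain TCP connects; a sketch of the same probe against these ports, assuming they are reachable as localhost from the test host:

// Illustrative TCP port probe for the simulator endpoints listed in the log.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        int[] ports = {6666, 6668, 6669, 6670}; // A&AI, SDNC, SO, VFC simulators
        for (int p : ports) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress("localhost", p), 2000);
                System.out.println(p + " open");
            } catch (IOException e) {
                System.out.println(p + " closed: " + e.getMessage());
            }
        }
    }
}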
simulator | 2024-01-17 23:14:25,431 INFO org.onap.policy.models.simulators started kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metadata.log.dir = null kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 kafka | metadata.log.max.snapshot.interval.ms = 3600000 kafka | metadata.log.segment.bytes = 1073741824 kafka | metadata.log.segment.min.bytes = 8388608 kafka | metadata.log.segment.ms = 604800000 kafka | metadata.max.idle.interval.ms = 500 kafka | metadata.max.retention.bytes = 104857600 kafka | metadata.max.retention.ms = 604800000 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | node.id = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 grafana | logger=migrator t=2024-01-17T23:14:25.38840938Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=508.549µs grafana | logger=migrator t=2024-01-17T23:14:25.390666282Z level=info msg="Executing migration" id="Add column help_flags1 to user table" grafana | logger=migrator t=2024-01-17T23:14:25.391748559Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.082257ms grafana | logger=migrator t=2024-01-17T23:14:25.394688242Z level=info msg="Executing migration" id="Update user table charset" grafana | logger=migrator t=2024-01-17T23:14:25.394716613Z level=info msg="Migration successfully executed" id="Update user table charset" duration=29.571µs grafana | logger=migrator t=2024-01-17T23:14:25.398734612Z level=info msg="Executing migration" id="Add last_seen_at column to user" grafana | logger=migrator t=2024-01-17T23:14:25.399524184Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=791.342µs grafana | logger=migrator t=2024-01-17T23:14:25.401936019Z level=info msg="Executing migration" id="Add missing user data" grafana | logger=migrator t=2024-01-17T23:14:25.402153902Z level=info msg="Migration successfully executed" id="Add missing user data" duration=217.703µs grafana | logger=migrator t=2024-01-17T23:14:25.405004434Z level=info msg="Executing migration" id="Add is_disabled column to user" grafana | logger=migrator t=2024-01-17T23:14:25.405791736Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=786.872µs policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | 
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-01-17T23:15:06.261+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-apex-pdp | [2024-01-17T23:15:06.261+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | [2024-01-17T23:15:06.261+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705533306259 policy-apex-pdp | [2024-01-17T23:15:06.263+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-1, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-01-17T23:15:06.276+00:00|INFO|ServiceManager|main] service manager starting policy-apex-pdp | [2024-01-17T23:15:06.276+00:00|INFO|ServiceManager|main] service manager starting topics policy-apex-pdp | [2024-01-17T23:15:06.281+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4041dc88-5007-445a-911f-3e52b8d238d9, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting policy-apex-pdp | [2024-01-17T23:15:06.313+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: policy-apex-pdp | allow.auto.create.topics = true policy-apex-pdp | auto.commit.interval.ms = 5000 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | auto.offset.reset = latest prometheus | 
ts=2024-01-17T23:14:24.211Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d prometheus | ts=2024-01-17T23:14:24.211Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" prometheus | ts=2024-01-17T23:14:24.211Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)" prometheus | ts=2024-01-17T23:14:24.211Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" prometheus | ts=2024-01-17T23:14:24.211Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" prometheus | ts=2024-01-17T23:14:24.211Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" prometheus | ts=2024-01-17T23:14:24.217Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 prometheus | ts=2024-01-17T23:14:24.218Z caller=main.go:1039 level=info msg="Starting TSDB ..." prometheus | ts=2024-01-17T23:14:24.220Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090 prometheus | ts=2024-01-17T23:14:24.220Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 prometheus | ts=2024-01-17T23:14:24.221Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" prometheus | ts=2024-01-17T23:14:24.221Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.86µs prometheus | ts=2024-01-17T23:14:24.221Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while" prometheus | ts=2024-01-17T23:14:24.221Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 prometheus | ts=2024-01-17T23:14:24.221Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=77.441µs wal_replay_duration=271.874µs wbl_replay_duration=170ns total_replay_duration=372.166µs prometheus | ts=2024-01-17T23:14:24.241Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC prometheus | ts=2024-01-17T23:14:24.241Z caller=main.go:1063 level=info msg="TSDB started" prometheus | ts=2024-01-17T23:14:24.241Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml prometheus | ts=2024-01-17T23:14:24.243Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.538626ms db_storage=1.72µs remote_storage=1.96µs web_handler=400ns query_engine=1.58µs scrape=394.456µs scrape_sd=187.854µs notify=44.19µs notify_sd=20.651µs rules=2.77µs tracing=6.98µs prometheus | ts=2024-01-17T23:14:24.243Z caller=main.go:1024 level=info msg="Server is ready to receive web requests." prometheus | ts=2024-01-17T23:14:24.243Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 
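Editor's note: Prometheus logged "Server is ready to receive web requests" with TLS disabled on :9090. Its standard lifecycle endpoints, /-/ready and /-/healthy, confirm the same thing externally; a minimal sketch, assuming the port is reachable as localhost:9090:

# Block until Prometheus reports ready, then check health.
until curl -sf http://localhost:9090/-/ready > /dev/null; do
  sleep 1
done
curl -s http://localhost:9090/-/healthy   # expect a short "Healthy" message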
policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | check.crcs = true policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2 policy-apex-pdp | client.rack = mariadb | 2024-01-17 23:14:18+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-01-17 23:14:19+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' mariadb | 2024-01-17 23:14:19+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. mariadb | 2024-01-17 23:14:19+00:00 [Note] [Entrypoint]: Initializing database files mariadb | 2024-01-17 23:14:19 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-17 23:14:19 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-17 23:14:19 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | mariadb | mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! mariadb | To do so, start the server, then issue the following command: mariadb | mariadb | '/usr/bin/mysql_secure_installation' mariadb | mariadb | which will also give you the option of removing the test mariadb | databases and anonymous user created by default. This is mariadb | strongly recommended for production servers. mariadb | mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb mariadb | mariadb | Please report any problems at https://mariadb.org/jira mariadb | mariadb | The latest information about MariaDB is available at https://mariadb.org/. mariadb | mariadb | Consider joining MariaDB's strong and vibrant community: mariadb | https://mariadb.org/get-involved/ mariadb | mariadb | 2024-01-17 23:14:20+00:00 [Note] [Entrypoint]: Database files initialized mariadb | 2024-01-17 23:14:20+00:00 [Note] [Entrypoint]: Starting temporary server mariadb | 2024-01-17 23:14:20+00:00 [Note] [Entrypoint]: Waiting for server startup mariadb | 2024-01-17 23:14:20 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... mariadb | 2024-01-17 23:14:20 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 mariadb | 2024-01-17 23:14:20 0 [Note] InnoDB: Number of transaction pools: 1 mariadb | 2024-01-17 23:14:20 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions mariadb | 2024-01-17 23:14:20 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) mariadb | 2024-01-17 23:14:20 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) mariadb | 2024-01-17 23:14:20 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF mariadb | 2024-01-17 23:14:20 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB mariadb | 2024-01-17 23:14:20 0 [Note] InnoDB: Completed initialization of buffer pool mariadb | 2024-01-17 23:14:21 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) mariadb | 2024-01-17 23:14:21 0 [Note] InnoDB: 128 rollback segments are active. mariadb | 2024-01-17 23:14:21 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... mariadb | 2024-01-17 23:14:21 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
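Editor's note: the entrypoint above starts a temporary server and polls it ("Waiting for server startup"). The same wait can be reproduced from a sidecar container; a sketch assuming MYSQL_ROOT_PASSWORD is set as in the db.sh trace later in this log:

# mysqladmin ping succeeds once mariadbd accepts connections;
# hostname and credentials are assumptions about this deployment.
until mysqladmin ping -h mariadb -uroot -p"${MYSQL_ROOT_PASSWORD}" --silent; do
  sleep 2
done
echo "mariadbd is accepting connections"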
mariadb | 2024-01-17 23:14:21 0 [Note] InnoDB: log sequence number 46590; transaction id 14 mariadb | 2024-01-17 23:14:21 0 [Note] Plugin 'FEEDBACK' is disabled. mariadb | 2024-01-17 23:14:21 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. mariadb | 2024-01-17 23:14:21 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-01-17 23:14:21 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. mariadb | 2024-01-17 23:14:21 0 [Note] mariadbd: ready for connections. mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution mariadb | 2024-01-17 23:14:21+00:00 [Note] [Entrypoint]: Temporary server started. mariadb | 2024-01-17 23:14:24+00:00 [Note] [Entrypoint]: Creating user policy_user mariadb | 2024-01-17 23:14:24+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) mariadb | mariadb | 2024-01-17 23:14:24+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf mariadb | mariadb | 2024-01-17 23:14:24+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh mariadb | #!/bin/bash -xv mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. mariadb | # policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | default.api.timeout.ms = 60000 policy-apex-pdp | enable.auto.commit = true policy-apex-pdp | exclude.internal.topics = true policy-apex-pdp | fetch.max.bytes = 52428800 policy-apex-pdp | fetch.max.wait.ms = 500 policy-apex-pdp | fetch.min.bytes = 1 policy-apex-pdp | group.id = 4041dc88-5007-445a-911f-3e52b8d238d9 policy-apex-pdp | group.instance.id = null policy-apex-pdp | heartbeat.interval.ms = 3000 policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | internal.leave.group.on.close = true policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false policy-apex-pdp | isolation.level = read_uncommitted policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | max.partition.fetch.bytes = 1048576 policy-apex-pdp | max.poll.interval.ms = 300000 policy-apex-pdp | max.poll.records = 500 policy-apex-pdp | metadata.max.age.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] policy-apex-pdp | receive.buffer.bytes = 65536 policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-apex-pdp | reconnect.backoff.ms = 50 policy-apex-pdp | request.timeout.ms = 30000 policy-apex-pdp | retry.backoff.ms = 100 policy-apex-pdp | sasl.client.callback.handler.class = null policy-apex-pdp | sasl.jaas.config = null policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-apex-pdp | sasl.kerberos.service.name = null policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-apex-pdp | sasl.login.callback.handler.class = null 
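Editor's note: the ConsumerConfig dump above shows the apex-pdp consumer subscribing to policy-pdp-pap via kafka:9092. To eyeball the same traffic during the test, a throwaway console consumer can be attached; the tool name on PATH inside the kafka container is an assumption (Confluent images ship it without the .sh suffix):

# Bootstrap server and topic come from the ConsumerConfig dump above.
docker exec kafka kafka-console-consumer \
  --bootstrap-server kafka:9092 \
  --topic policy-pdp-pap \
  --from-beginning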
policy-apex-pdp | sasl.login.class = null policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-apex-pdp | sasl.login.read.timeout.ms = null policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); mariadb | # you may not use this file except in compliance with the License. mariadb | # You may obtain a copy of the License at mariadb | # mariadb | # http://www.apache.org/licenses/LICENSE-2.0 mariadb | # mariadb | # Unless required by applicable law or agreed to in writing, software mariadb | # distributed under the License is distributed on an "AS IS" BASIS, mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. mariadb | # See the License for the specific language governing permissions and mariadb | # limitations under the License. mariadb | mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | do mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" mariadb | done mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' mariadb | policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-apex-pdp | sasl.mechanism = GSSAPI policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-apex-pdp | 
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-apex-pdp | security.protocol = PLAINTEXT policy-apex-pdp | security.providers = null policy-apex-pdp | send.buffer.bytes = 131072 policy-apex-pdp | session.timeout.ms = 45000 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 policy-apex-pdp | ssl.cipher.suites = null policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-apex-pdp | ssl.endpoint.identification.algorithm = https policy-apex-pdp | ssl.engine.factory.class = null policy-apex-pdp | ssl.key.password = null policy-apex-pdp | ssl.keymanager.algorithm = SunX509 policy-apex-pdp | ssl.keystore.certificate.chain = null policy-apex-pdp | ssl.keystore.key = null policy-apex-pdp | ssl.keystore.location = null policy-apex-pdp | ssl.keystore.password = null policy-apex-pdp | ssl.keystore.type = JKS policy-apex-pdp | ssl.protocol = TLSv1.3 policy-apex-pdp | ssl.provider = null policy-apex-pdp | ssl.secure.random.implementation = null policy-apex-pdp | ssl.trustmanager.algorithm = PKIX policy-apex-pdp | ssl.truststore.certificates = null policy-apex-pdp | ssl.truststore.location = null policy-apex-pdp | ssl.truststore.password = null policy-apex-pdp | ssl.truststore.type = JKS policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-apex-pdp | policy-apex-pdp | [2024-01-17T23:15:06.321+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 policy-apex-pdp | [2024-01-17T23:15:06.321+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a policy-apex-pdp | [2024-01-17T23:15:06.321+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705533306321 policy-apex-pdp | [2024-01-17T23:15:06.322+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Subscribed to topic(s): policy-pdp-pap policy-apex-pdp | [2024-01-17T23:15:06.322+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=139ed967-c71b-4bf1-928d-911e3abc9ce9, alive=false, publisher=null]]: starting policy-apex-pdp | [2024-01-17T23:15:06.347+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-apex-pdp | acks = -1 policy-apex-pdp | auto.include.jmx.reporter = true policy-apex-pdp | batch.size = 16384 policy-apex-pdp | bootstrap.servers = [kafka:9092] policy-apex-pdp | buffer.memory = 33554432 policy-apex-pdp | client.dns.lookup = use_all_dns_ips policy-apex-pdp | client.id = producer-1 policy-apex-pdp | compression.type = none policy-apex-pdp | connections.max.idle.ms = 540000 policy-apex-pdp | delivery.timeout.ms = 120000 policy-apex-pdp | enable.idempotence = true policy-apex-pdp | interceptor.classes = [] policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer policy-apex-pdp | linger.ms = 0 policy-apex-pdp | max.block.ms = 60000 policy-apex-pdp | max.in.flight.requests.per.connection = 5 policy-apex-pdp | max.request.size = 1048576 policy-apex-pdp | metadata.max.age.ms = 300000 
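Editor's note: the ProducerConfig dump that begins here belongs to the InlineKafkaTopicSink that publishes PDP_STATUS heartbeats to the same topic. For smoke-testing, a hand-written record can be pushed the same way; the payload shape is borrowed loosely from the heartbeat JSON later in this log, and the tool name is an assumption:

# Inject one test record into the heartbeat topic.
echo '{"messageName":"PDP_STATUS","pdpGroup":"defaultGroup"}' |
  docker exec -i kafka kafka-console-producer \
    --bootstrap-server kafka:9092 \
    --topic policy-pdp-pap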
policy-apex-pdp | metadata.max.idle.ms = 300000 policy-apex-pdp | metric.reporters = [] policy-apex-pdp | metrics.num.samples = 2 policy-apex-pdp | metrics.recording.level = INFO policy-apex-pdp | metrics.sample.window.ms = 30000 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true policy-apex-pdp | partitioner.availability.timeout.ms = 0 policy-apex-pdp | partitioner.class = null mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' grafana | logger=migrator t=2024-01-17T23:14:25.409390409Z level=info msg="Executing migration" id="Add index user.login/user.email" policy-apex-pdp | partitioner.ignore.keys = false policy-apex-pdp | receive.buffer.bytes = 32768 policy-db-migrator | Waiting for mariadb port 3306... mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp grafana | logger=migrator t=2024-01-17T23:14:25.409926637Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=536.218µs policy-pap | Waiting for mariadb port 3306... policy-apex-pdp | reconnect.backoff.max.ms = 1000 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused mariadb | mariadb | 2024-01-17 23:14:24+00:00 [Note] [Entrypoint]: Stopping temporary server grafana | logger=migrator t=2024-01-17T23:14:25.414568826Z level=info msg="Executing migration" id="Add is_service_account column to user" policy-pap | mariadb (172.17.0.2:3306) open policy-apex-pdp | reconnect.backoff.ms = 50 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused mariadb | 2024-01-17 23:14:24 0 [Note] mariadbd (initiated by: unknown): Normal shutdown kafka | offsets.topic.compression.codec = 0 grafana | logger=migrator t=2024-01-17T23:14:25.415390338Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=821.532µs policy-pap | Waiting for kafka port 9092... policy-apex-pdp | request.timeout.ms = 30000 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused mariadb | 2024-01-17 23:14:24 0 [Note] InnoDB: FTS optimize thread exiting. kafka | offsets.topic.num.partitions = 50 grafana | logger=migrator t=2024-01-17T23:14:25.417942706Z level=info msg="Executing migration" id="Update is_service_account column to nullable" grafana | logger=migrator t=2024-01-17T23:14:25.425075942Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.132866ms policy-apex-pdp | retries = 2147483647 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused mariadb | 2024-01-17 23:14:24 0 [Note] InnoDB: Starting shutdown... 
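Editor's note: the repeated "nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused" lines are a retry loop inside policy-db-migrator. A minimal equivalent sketch (the actual script baked into the image is not shown in this log):

# Retry until the mariadb TCP port opens, as the migrator does above.
while ! nc -z mariadb 3306; do
  sleep 2
done
echo "Connection to mariadb 3306 port succeeded"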
kafka | offsets.topic.replication.factor = 1 grafana | logger=migrator t=2024-01-17T23:14:25.478377469Z level=info msg="Executing migration" id="create temp user table v1-7" grafana | logger=migrator t=2024-01-17T23:14:25.478983368Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=607.649µs policy-apex-pdp | retry.backoff.ms = 100 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused mariadb | 2024-01-17 23:14:24 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool kafka | offsets.topic.segment.bytes = 104857600 grafana | logger=migrator t=2024-01-17T23:14:25.485305741Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" grafana | logger=migrator t=2024-01-17T23:14:25.486106153Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=804.032µs policy-apex-pdp | sasl.client.callback.handler.class = null policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused mariadb | 2024-01-17 23:14:24 0 [Note] InnoDB: Buffer pool(s) dump completed at 240117 23:14:24 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding grafana | logger=migrator t=2024-01-17T23:14:25.489482334Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" grafana | logger=migrator t=2024-01-17T23:14:25.48995117Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=468.206µs policy-apex-pdp | sasl.jaas.config = null policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" kafka | password.encoder.iterations = 4096 grafana | logger=migrator t=2024-01-17T23:14:25.493587884Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" grafana | logger=migrator t=2024-01-17T23:14:25.494253903Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=665.429µs policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | 321 blocks mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: Shutdown completed; log sequence number 327895; transaction id 298 kafka | password.encoder.key.length = 128 grafana | logger=migrator t=2024-01-17T23:14:25.499504491Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" grafana | logger=migrator t=2024-01-17T23:14:25.500207361Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=702.98µs policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | Preparing upgrade release version: 0800 mariadb | 2024-01-17 23:14:25 0 [Note] mariadbd: Shutdown complete kafka | password.encoder.keyfactory.algorithm = null grafana | logger=migrator t=2024-01-17T23:14:25.503920427Z level=info msg="Executing migration" id="Update temp_user table charset" grafana | logger=migrator t=2024-01-17T23:14:25.503987228Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=67.731µs policy-apex-pdp | sasl.kerberos.service.name = null policy-db-migrator | Preparing upgrade release version: 0900 mariadb | kafka | password.encoder.old.secret = null grafana | logger=migrator t=2024-01-17T23:14:25.506399924Z level=info msg="Executing migration" id="drop index IDX_temp_user_email 
- v1" grafana | logger=migrator t=2024-01-17T23:14:25.507306327Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=906.313µs policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | Preparing upgrade release version: 1000 mariadb | 2024-01-17 23:14:25+00:00 [Note] [Entrypoint]: Temporary server stopped kafka | password.encoder.secret = null grafana | logger=migrator t=2024-01-17T23:14:25.511939635Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" grafana | logger=migrator t=2024-01-17T23:14:25.512586385Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=646.5µs policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | Preparing upgrade release version: 1100 mariadb | kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder grafana | logger=migrator t=2024-01-17T23:14:25.515577039Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" grafana | logger=migrator t=2024-01-17T23:14:25.516180098Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=602.759µs policy-apex-pdp | sasl.login.callback.handler.class = null policy-db-migrator | Preparing upgrade release version: 1200 mariadb | 2024-01-17 23:14:25+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. kafka | process.roles = [] grafana | logger=migrator t=2024-01-17T23:14:25.51832102Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" grafana | logger=migrator t=2024-01-17T23:14:25.518987759Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=668.89µs policy-apex-pdp | sasl.login.class = null policy-db-migrator | Preparing upgrade release version: 1300 mariadb | kafka | producer.id.expiration.check.interval.ms = 600000 grafana | logger=migrator t=2024-01-17T23:14:25.523857121Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" grafana | logger=migrator t=2024-01-17T23:14:25.527376353Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.518822ms policy-apex-pdp | sasl.login.connect.timeout.ms = null policy-db-migrator | Done mariadb | 2024-01-17 23:14:25 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
kafka | producer.id.expiration.ms = 86400000 grafana | logger=migrator t=2024-01-17T23:14:25.530036122Z level=info msg="Executing migration" id="create temp_user v2" grafana | logger=migrator t=2024-01-17T23:14:25.530727182Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=690.66µs policy-apex-pdp | sasl.login.read.timeout.ms = null policy-db-migrator | name version mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 kafka | producer.purgatory.purge.interval.requests = 1000 grafana | logger=migrator t=2024-01-17T23:14:25.535467293Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" grafana | logger=migrator t=2024-01-17T23:14:25.536170513Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=700.81µs policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | policyadmin 0 mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: Number of transaction pools: 1 kafka | queued.max.request.bytes = -1 grafana | logger=migrator t=2024-01-17T23:14:25.543624563Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" grafana | logger=migrator t=2024-01-17T23:14:25.545270028Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.645175ms policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions kafka | queued.max.requests = 500 grafana | logger=migrator t=2024-01-17T23:14:25.549029833Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" grafana | logger=migrator t=2024-01-17T23:14:25.550252861Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.223088ms policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | upgrade: 0 -> 1300 mariadb | 2024-01-17 23:14:25 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) kafka | quota.window.num = 11 grafana | logger=migrator t=2024-01-17T23:14:25.55417761Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" grafana | logger=migrator t=2024-01-17T23:14:25.555362087Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.183687ms policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | mariadb | 2024-01-17 23:14:25 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) kafka | quota.window.size.seconds = 1 grafana | logger=migrator t=2024-01-17T23:14:25.559075601Z level=info msg="Executing migration" id="copy temp_user v1 to v2" grafana | logger=migrator t=2024-01-17T23:14:25.559424307Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=348.486µs policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql mariadb | 2024-01-17 23:14:25 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 grafana | logger=migrator t=2024-01-17T23:14:25.564461231Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" grafana | logger=migrator t=2024-01-17T23:14:25.564926439Z level=info msg="Migration 
successfully executed" id="drop temp_user_tmp_qwerty" duration=465.078µs policy-apex-pdp | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB kafka | remote.log.manager.task.interval.ms = 30000 policy-pap | kafka (172.17.0.8:9092) open grafana | logger=migrator t=2024-01-17T23:14:25.568114256Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" policy-apex-pdp | sasl.mechanism = GSSAPI policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: Completed initialization of buffer pool kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 policy-pap | Waiting for api port 6969... grafana | logger=migrator t=2024-01-17T23:14:25.568529952Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=415.376µs policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | -------------- mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) policy-pap | api (172.17.0.7:6969) open grafana | logger=migrator t=2024-01-17T23:14:25.572926426Z level=info msg="Executing migration" id="create star table" policy-apex-pdp | sasl.oauthbearer.expected.audience = null policy-db-migrator | mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: 128 rollback segments are active. kafka | remote.log.manager.task.retry.backoff.ms = 500 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml grafana | logger=migrator t=2024-01-17T23:14:25.573728199Z level=info msg="Migration successfully executed" id="create star table" duration=802.033µs policy-apex-pdp | sasl.oauthbearer.expected.issuer = null policy-db-migrator | mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... kafka | remote.log.manager.task.retry.jitter = 0.2 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json grafana | logger=migrator t=2024-01-17T23:14:25.579471354Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. kafka | remote.log.manager.thread.pool.size = 10 policy-pap | grafana | logger=migrator t=2024-01-17T23:14:25.580177294Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=705.66µs policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: log sequence number 327895; transaction id 299 kafka | remote.log.metadata.manager.class.name = null policy-pap | . 
____ _ __ _ _ grafana | logger=migrator t=2024-01-17T23:14:25.583637245Z level=info msg="Executing migration" id="create org table v1" policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool kafka | remote.log.metadata.manager.class.path = null policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ grafana | logger=migrator t=2024-01-17T23:14:25.584353515Z level=info msg="Migration successfully executed" id="create org table v1" duration=716.3µs policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- mariadb | 2024-01-17 23:14:25 0 [Note] Plugin 'FEEDBACK' is disabled. kafka | remote.log.metadata.manager.impl.prefix = null policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ grafana | logger=migrator t=2024-01-17T23:14:25.588873312Z level=info msg="Executing migration" id="create index UQE_org_name - v1" policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | mariadb | 2024-01-17 23:14:25 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. kafka | remote.log.metadata.manager.listener.name = null policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) grafana | logger=migrator t=2024-01-17T23:14:25.589622893Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=748.691µs policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | mariadb | 2024-01-17 23:14:25 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. kafka | remote.log.reader.max.pending.tasks = 100 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / grafana | logger=migrator t=2024-01-17T23:14:25.592920693Z level=info msg="Executing migration" id="create org_user table v1" policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql mariadb | 2024-01-17 23:14:25 0 [Note] Server socket created on IP: '0.0.0.0'. kafka | remote.log.reader.threads = 10 policy-pap | =========|_|==============|___/=/_/_/_/ grafana | logger=migrator t=2024-01-17T23:14:25.593662243Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=740.19µs grafana | logger=migrator t=2024-01-17T23:14:25.599932796Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" policy-apex-pdp | security.protocol = PLAINTEXT mariadb | 2024-01-17 23:14:25 0 [Note] Server socket created on IP: '::'. 
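Editor's note: unlike the temporary bootstrap instance (which reported "port: 0", socket-only), the final mariadbd has now created server sockets on 0.0.0.0 and :: and, per the version line that follows, listens on 3306. A plain TCP check is enough to confirm:

nc -z mariadb 3306 && echo "mariadb reachable on 3306"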
kafka | remote.log.storage.manager.class.name = null policy-pap | :: Spring Boot :: (v3.1.4) grafana | logger=migrator t=2024-01-17T23:14:25.601483509Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.550603ms grafana | logger=migrator t=2024-01-17T23:14:25.607002171Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" policy-apex-pdp | security.providers = null mariadb | 2024-01-17 23:14:25 0 [Note] mariadbd: ready for connections. kafka | remote.log.storage.manager.class.path = null policy-pap | grafana | logger=migrator t=2024-01-17T23:14:25.607758892Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=758.961µs grafana | logger=migrator t=2024-01-17T23:14:25.612485752Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" policy-apex-pdp | send.buffer.bytes = 131072 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution kafka | remote.log.storage.manager.impl.prefix = null policy-pap | [2024-01-17T23:14:54.504+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.9 with PID 37 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) grafana | logger=migrator t=2024-01-17T23:14:25.613221522Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=735.75µs policy-db-migrator | -------------- policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 mariadb | 2024-01-17 23:14:25 0 [Note] InnoDB: Buffer pool(s) load completed at 240117 23:14:25 kafka | remote.log.storage.system.enable = false policy-pap | [2024-01-17T23:14:54.505+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" grafana | logger=migrator t=2024-01-17T23:14:25.618811465Z level=info msg="Executing migration" id="Update org table charset" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 mariadb | 2024-01-17 23:14:25 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) policy-pap | [2024-01-17T23:14:56.328+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. grafana | logger=migrator t=2024-01-17T23:14:25.618838865Z level=info msg="Migration successfully executed" id="Update org table charset" duration=28.67µs policy-db-migrator | -------------- policy-apex-pdp | ssl.cipher.suites = null policy-pap | [2024-01-17T23:14:56.436+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 95 ms. Found 7 JPA repository interfaces. 
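Editor's note: PAP's Tomcat came up on port 6969 under the /policy/pap/v1 context logged nearby. Once startup completes, the API can be health-checked; the scheme, the credentials, and the /healthcheck resource name are assumptions about this CSIT deployment:

# Port 6969 and the /policy/pap/v1 context path come from the Tomcat
# and WebApplicationContext lines in this log; PAP_USER/PAP_PASSWORD
# are hypothetical placeholders for the deployment's credentials.
curl -sk -u "${PAP_USER}:${PAP_PASSWORD}" \
  http://localhost:6969/policy/pap/v1/healthcheck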
grafana | logger=migrator t=2024-01-17T23:14:25.627337481Z level=info msg="Executing migration" id="Update org_user table charset" policy-db-migrator | policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] kafka | replica.fetch.backoff.ms = 1000 mariadb | 2024-01-17 23:14:25 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) policy-pap | [2024-01-17T23:14:56.899+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler grafana | logger=migrator t=2024-01-17T23:14:25.627467323Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=92.931µs policy-db-migrator | policy-apex-pdp | ssl.endpoint.identification.algorithm = https kafka | replica.fetch.max.bytes = 1048576 mariadb | 2024-01-17 23:14:26 59 [Warning] Aborted connection 59 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) policy-pap | [2024-01-17T23:14:56.899+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler grafana | logger=migrator t=2024-01-17T23:14:25.631633914Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql policy-apex-pdp | ssl.engine.factory.class = null kafka | replica.fetch.min.bytes = 1 mariadb | 2024-01-17 23:14:28 94 [Warning] Aborted connection 94 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) policy-pap | [2024-01-17T23:14:57.675+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) grafana | logger=migrator t=2024-01-17T23:14:25.631970359Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=337.045µs policy-db-migrator | -------------- policy-apex-pdp | ssl.key.password = null kafka | replica.fetch.response.max.bytes = 10485760 policy-pap | [2024-01-17T23:14:57.684+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] grafana | logger=migrator t=2024-01-17T23:14:25.634313454Z level=info msg="Executing migration" id="create dashboard table" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) policy-apex-pdp | ssl.keymanager.algorithm = SunX509 kafka | replica.fetch.wait.max.ms = 500 policy-pap | [2024-01-17T23:14:57.686+00:00|INFO|StandardService|main] Starting service [Tomcat] grafana | logger=migrator t=2024-01-17T23:14:25.635345199Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.032705ms policy-db-migrator | -------------- policy-apex-pdp | ssl.keystore.certificate.chain = null kafka | replica.high.watermark.checkpoint.interval.ms = 5000 policy-pap | [2024-01-17T23:14:57.686+00:00|INFO|StandardEngine|main] 
Starting Servlet engine: [Apache Tomcat/10.1.16] grafana | logger=migrator t=2024-01-17T23:14:25.637959998Z level=info msg="Executing migration" id="add index dashboard.account_id" policy-db-migrator | policy-apex-pdp | ssl.keystore.key = null kafka | replica.lag.time.max.ms = 30000 policy-pap | [2024-01-17T23:14:57.772+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext grafana | logger=migrator t=2024-01-17T23:14:25.638977773Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.016995ms policy-db-migrator | policy-apex-pdp | ssl.keystore.location = null kafka | replica.selector.class = null policy-pap | [2024-01-17T23:14:57.772+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3188 ms grafana | logger=migrator t=2024-01-17T23:14:25.643535541Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql policy-apex-pdp | ssl.keystore.password = null kafka | replica.socket.receive.buffer.bytes = 65536 policy-pap | [2024-01-17T23:14:58.246+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] grafana | logger=migrator t=2024-01-17T23:14:25.644310722Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=774.881µs policy-db-migrator | -------------- policy-apex-pdp | ssl.keystore.type = JKS kafka | replica.socket.timeout.ms = 30000 policy-pap | [2024-01-17T23:14:58.334+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 grafana | logger=migrator t=2024-01-17T23:14:25.648130858Z level=info msg="Executing migration" id="create dashboard_tag table" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) policy-apex-pdp | ssl.protocol = TLSv1.3 kafka | replication.quota.window.num = 11 policy-pap | [2024-01-17T23:14:58.338+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer grafana | logger=migrator t=2024-01-17T23:14:25.648729397Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=598.549µs policy-db-migrator | -------------- policy-apex-pdp | ssl.provider = null kafka | replication.quota.window.size.seconds = 1 policy-pap | [2024-01-17T23:14:58.392+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled grafana | logger=migrator t=2024-01-17T23:14:25.651827733Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" policy-db-migrator | policy-apex-pdp | ssl.secure.random.implementation = null kafka | request.timeout.ms = 30000 policy-pap | [2024-01-17T23:14:58.743+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer grafana | logger=migrator t=2024-01-17T23:14:25.652885998Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.056485ms policy-db-migrator | policy-apex-pdp | ssl.trustmanager.algorithm = PKIX kafka | reserved.broker.max.id = 1000 policy-pap | [2024-01-17T23:14:58.763+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
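Editor's note: policy-db-migrator is applying the 0100-0190 upgrade scripts shown above (jpapdpgroup_properties, jpapdpstatistics_enginestats, jpatoscacapabilityassignment_*, ...) against the policyadmin schema it reported upgrading 0 -> 1300. Once it finishes, the created tables can be listed:

# Inspect the tables produced by the CREATE TABLE statements above.
docker exec mariadb mysql -upolicy_user -ppolicy_user policyadmin \
  -e "SHOW TABLES LIKE 'jpa%';"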
grafana | logger=migrator t=2024-01-17T23:14:25.657801701Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql policy-apex-pdp | ssl.truststore.certificates = null kafka | sasl.client.callback.handler.class = null policy-pap | [2024-01-17T23:14:58.871+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2b03d52f grafana | logger=migrator t=2024-01-17T23:14:25.658943648Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.116336ms policy-db-migrator | -------------- policy-apex-pdp | ssl.truststore.location = null kafka | sasl.enabled.mechanisms = [GSSAPI] policy-pap | [2024-01-17T23:14:58.873+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. grafana | logger=migrator t=2024-01-17T23:14:25.661901252Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) policy-apex-pdp | ssl.truststore.password = null kafka | sasl.jaas.config = null policy-pap | [2024-01-17T23:14:58.902+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) grafana | logger=migrator t=2024-01-17T23:14:25.666418199Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.516517ms policy-db-migrator | -------------- policy-apex-pdp | ssl.truststore.type = JKS kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-pap | [2024-01-17T23:14:58.903+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead grafana | logger=migrator t=2024-01-17T23:14:25.669072598Z level=info msg="Executing migration" id="create dashboard v2" policy-db-migrator | policy-apex-pdp | transaction.timeout.ms = 60000 kafka | sasl.kerberos.min.time.before.relogin = 60000 policy-pap | [2024-01-17T23:15:00.811+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) grafana | logger=migrator t=2024-01-17T23:14:25.669661596Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=588.518µs policy-db-migrator | policy-apex-pdp | transactional.id = null kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] policy-pap | [2024-01-17T23:15:00.814+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' grafana | logger=migrator t=2024-01-17T23:14:25.674316326Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer kafka | sasl.kerberos.service.name = null policy-pap | [2024-01-17T23:15:01.438+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository grafana | logger=migrator t=2024-01-17T23:14:25.675026446Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=709.99µs policy-db-migrator | -------------- policy-apex-pdp | kafka | sasl.kerberos.ticket.renew.jitter = 0.05 policy-pap | [2024-01-17T23:15:02.265+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository grafana | logger=migrator t=2024-01-17T23:14:25.678139042Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | [2024-01-17T23:15:06.355+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-pap | [2024-01-17T23:15:02.361+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository grafana | logger=migrator t=2024-01-17T23:14:25.679702215Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.563073ms policy-db-migrator | -------------- policy-apex-pdp | [2024-01-17T23:15:06.379+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 kafka | sasl.login.callback.handler.class = null policy-pap | [2024-01-17T23:15:02.661+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: grafana | logger=migrator t=2024-01-17T23:14:25.683054605Z level=info msg="Executing migration" id="copy dashboard v1 to v2" policy-db-migrator | policy-apex-pdp | [2024-01-17T23:15:06.379+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a kafka | sasl.login.class = null policy-pap | allow.auto.create.topics = true grafana | logger=migrator t=2024-01-17T23:14:25.683527911Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=473.236µs policy-db-migrator | policy-apex-pdp | [2024-01-17T23:15:06.379+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705533306379 kafka | sasl.login.connect.timeout.ms = null policy-pap | auto.commit.interval.ms = 5000 grafana | logger=migrator t=2024-01-17T23:14:25.686650218Z level=info msg="Executing migration" id="drop table dashboard_v1" policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql policy-apex-pdp | [2024-01-17T23:15:06.379+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=139ed967-c71b-4bf1-928d-911e3abc9ce9, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created kafka | sasl.login.read.timeout.ms = null policy-pap | auto.include.jmx.reporter = true grafana | logger=migrator t=2024-01-17T23:14:25.687396988Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=746.791µs policy-db-migrator | -------------- policy-apex-pdp | 
policy-apex-pdp | [2024-01-17T23:15:06.379+00:00|INFO|ServiceManager|main] service manager starting set alive
kafka | sasl.login.refresh.buffer.seconds = 300
policy-pap | auto.offset.reset = latest
grafana | logger=migrator t=2024-01-17T23:14:25.691642922Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
policy-apex-pdp | [2024-01-17T23:15:06.379+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
kafka | sasl.login.refresh.min.period.seconds = 60
policy-pap | bootstrap.servers = [kafka:9092]
grafana | logger=migrator t=2024-01-17T23:14:25.691705452Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=63.1µs
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:06.381+00:00|INFO|ServiceManager|main] service manager starting topic sinks
kafka | sasl.login.refresh.window.factor = 0.8
policy-pap | check.crcs = true
grafana | logger=migrator t=2024-01-17T23:14:25.693957756Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:06.381+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
kafka | sasl.login.refresh.window.jitter = 0.05
policy-pap | client.dns.lookup = use_all_dns_ips
grafana | logger=migrator t=2024-01-17T23:14:25.696502564Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.536497ms
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:06.383+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
kafka | sasl.login.retry.backoff.max.ms = 10000
policy-pap | client.id = consumer-093ff4e0-f365-4742-90a8-254a3129a143-1
grafana | logger=migrator t=2024-01-17T23:14:25.701373745Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
policy-apex-pdp | [2024-01-17T23:15:06.383+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
kafka | sasl.login.retry.backoff.ms = 100
policy-pap | client.rack =
grafana | logger=migrator t=2024-01-17T23:14:25.703863172Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.490317ms
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:06.384+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
kafka | sasl.mechanism.controller.protocol = GSSAPI
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-01-17T23:14:25.706727024Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | [2024-01-17T23:15:06.384+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4041dc88-5007-445a-911f-3e52b8d238d9, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4ee37ca3
kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-01-17T23:14:25.708922287Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.194853ms
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:06.384+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4041dc88-5007-445a-911f-3e52b8d238d9, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
kafka | sasl.oauthbearer.clock.skew.seconds = 30
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-01-17T23:14:25.71181743Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:06.384+00:00|INFO|ServiceManager|main] service manager starting Create REST server
kafka | sasl.oauthbearer.expected.audience = null
policy-pap | exclude.internal.topics = true
grafana | logger=migrator t=2024-01-17T23:14:25.71257466Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=756.781µs
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:06.401+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
kafka | sasl.oauthbearer.expected.issuer = null
policy-pap | fetch.max.bytes = 52428800
grafana | logger=migrator t=2024-01-17T23:14:25.716968946Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
policy-apex-pdp | []
kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-01-17T23:14:25.718681491Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.712324ms
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:06.403+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"ecfc7ed0-53c3-4d7f-88b0-5a363de3d2d6","timestampMs":1705533306385,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup"}
grafana | logger=migrator t=2024-01-17T23:14:25.721386651Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
policy-apex-pdp | [2024-01-17T23:15:06.719+00:00|INFO|ServiceManager|main] service manager starting Rest Server
policy-apex-pdp | [2024-01-17T23:15:06.719+00:00|INFO|ServiceManager|main] service manager starting
grafana | logger=migrator t=2024-01-17T23:14:25.722161003Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=774.112µs
policy-db-migrator | --------------
kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-apex-pdp | [2024-01-17T23:15:06.719+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
policy-apex-pdp | [2024-01-17T23:15:06.720+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@71a9b4c7{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4628b1d3{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-01-17T23:14:25.724860513Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
policy-db-migrator |
kafka | sasl.oauthbearer.jwks.endpoint.url = null
policy-apex-pdp | [2024-01-17T23:15:06.744+00:00|INFO|ServiceManager|main] service manager started
policy-apex-pdp | [2024-01-17T23:15:06.744+00:00|INFO|ServiceManager|main] service manager started
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:25.725597193Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=736.42µs
kafka | sasl.oauthbearer.scope.claim.name = scope
policy-apex-pdp | [2024-01-17T23:15:06.745+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
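Note on the JettyServletServer entries above: apex-pdp brings up an embedded Jetty 11 instance on 0.0.0.0:6969 with a Jersey ServletContainer on /* and a Prometheus metrics servlet on /metrics. A rough sketch of that embedded-Jetty pattern under the same port and context path (this is an illustrative reconstruction, not ONAP's actual RestServer code; auth, metrics and Swagger wiring are omitted):

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;

    public class EmbeddedRestServer {
        public static void main(String[] args) throws Exception {
            Server server = new Server(6969);                     // same port as RestServerParameters above
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");                          // contextPath=/ as logged
            // The log shows a Jersey ServletContainer mapped to /*
            context.addServlet(
                new ServletHolder(new org.glassfish.jersey.servlet.ServletContainer()), "/*");
            server.setHandler(context);
            server.start();                                       // corresponds to the STARTING state logged above
            server.join();
        }
    }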
policy-pap | fetch.min.bytes = 1
policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
grafana | logger=migrator t=2024-01-17T23:14:25.730323663Z level=info msg="Executing migration" id="Update dashboard table charset"
kafka | sasl.oauthbearer.sub.claim.name = sub
policy-pap | group.id = 093ff4e0-f365-4742-90a8-254a3129a143
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:06.745+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@71a9b4c7{STOPPED}[11.0.18,sto=0], context=o.e.j.s.ServletContextHandler@4628b1d3{/,null,STOPPED}, connector=RestServerParameters@6a1d204a{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-2755d705==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@5eb35687{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-18cc679e==org.glassfish.jersey.servlet.ServletContainer@fbed57a2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
grafana | logger=migrator t=2024-01-17T23:14:25.730351734Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=30.1µs
kafka | sasl.oauthbearer.token.endpoint.url = null
policy-pap | group.instance.id = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | [2024-01-17T23:15:06.859+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.732999063Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
policy-pap | heartbeat.interval.ms = 3000
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:06.860+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: TCpMGCYeSECduTbHgcA3wg
kafka | sasl.server.callback.handler.class = null
grafana | logger=migrator t=2024-01-17T23:14:25.733054774Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=56.31µs
policy-pap | interceptor.classes = []
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:06.861+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
kafka | sasl.server.max.receive.size = 524288
grafana | logger=migrator t=2024-01-17T23:14:25.735867765Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
policy-pap | internal.leave.group.on.close = true
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:06.862+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | security.inter.broker.protocol = PLAINTEXT
grafana | logger=migrator t=2024-01-17T23:14:25.737899495Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.03135ms
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
policy-apex-pdp | [2024-01-17T23:15:06.864+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Cluster ID: TCpMGCYeSECduTbHgcA3wg
kafka | security.providers = null
grafana | logger=migrator t=2024-01-17T23:14:25.741905365Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
policy-pap | isolation.level = read_uncommitted
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:06.957+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | server.max.startup.time.ms = 9223372036854775807
grafana | logger=migrator t=2024-01-17T23:14:25.743762021Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.856057ms
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
policy-apex-pdp | [2024-01-17T23:15:06.983+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | socket.connection.setup.timeout.max.ms = 30000
grafana | logger=migrator t=2024-01-17T23:14:25.762954275Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
policy-pap | max.partition.fetch.bytes = 1048576
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:07.071+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | socket.connection.setup.timeout.ms = 10000
grafana | logger=migrator t=2024-01-17T23:14:25.767416101Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=4.471846ms
policy-pap | max.poll.interval.ms = 300000
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:07.084+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 5 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | socket.listen.backlog.size = 50
grafana | logger=migrator t=2024-01-17T23:14:25.770337955Z level=info msg="Executing migration" id="Add column uid in dashboard"
policy-pap | max.poll.records = 500
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:07.175+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | socket.receive.buffer.bytes = 102400
grafana | logger=migrator t=2024-01-17T23:14:25.772172052Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.833737ms
policy-pap | metadata.max.age.ms = 300000
policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
policy-apex-pdp | [2024-01-17T23:15:07.197+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | socket.request.max.bytes = 104857600
grafana | logger=migrator t=2024-01-17T23:14:25.774951533Z level=info msg="Executing migration" id="Update uid column values in dashboard"
policy-pap | metric.reporters = []
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:07.300+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | socket.send.buffer.bytes = 102400
grafana | logger=migrator t=2024-01-17T23:14:25.775156036Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=204.323µs
policy-pap | metrics.num.samples = 2
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | [2024-01-17T23:15:07.308+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 7 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | ssl.cipher.suites = []
grafana | logger=migrator t=2024-01-17T23:14:25.779476339Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
policy-pap | metrics.recording.level = INFO
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:07.410+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | ssl.client.auth = none
grafana | logger=migrator t=2024-01-17T23:14:25.78020209Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=725.781µs
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:07.417+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-01-17T23:14:25.78290335Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:07.479+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
kafka | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-01-17T23:14:25.783846354Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=941.914µs
policy-pap | receive.buffer.bytes = 65536
policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
policy-apex-pdp | [2024-01-17T23:15:07.481+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
kafka | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-01-17T23:14:25.78695065Z level=info msg="Executing migration" id="Update dashboard title length"
policy-pap | reconnect.backoff.max.ms = 1000
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:07.519+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 9 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | ssl.key.password = null
grafana | logger=migrator t=2024-01-17T23:14:25.787021501Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=71.841µs
policy-pap | reconnect.backoff.ms = 50
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | [2024-01-17T23:15:07.520+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.791887923Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:07.625+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
kafka | ssl.keymanager.algorithm = SunX509
policy-pap | request.timeout.ms = 30000
grafana | logger=migrator t=2024-01-17T23:14:25.79301042Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.122717ms
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:07.625+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | ssl.keystore.certificate.chain = null
policy-pap | retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-17T23:14:25.797011068Z level=info msg="Executing migration" id="create dashboard_provisioning"
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:07.728+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
kafka | ssl.keystore.key = null
policy-pap | sasl.client.callback.handler.class = null
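Note on the repeated UNKNOWN_TOPIC_OR_PARTITION / LEADER_NOT_AVAILABLE warnings above: they are the usual transient noise while the policy-pdp-pap topic is being auto-created and a leader elected on the single broker; the producer and consumer simply retry until metadata resolves. A small sketch (assuming broker address kafka:9092, as in the configs above) that pre-creates the topic with the Kafka AdminClient so clients do not have to ride out the retries:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreatePdpPapTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // 1 partition, replication factor 1 -- enough for a single-broker CSIT environment
                NewTopic topic = new NewTopic("policy-pdp-pap", 1, (short) 1);
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }

Once the leader is available the warnings stop, which is exactly the pattern the log shows after the topic settles.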
executed" id="create dashboard_provisioning" duration=640.261µs policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql policy-apex-pdp | [2024-01-17T23:15:07.731+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 11 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | ssl.keystore.location = null policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-01-17T23:14:25.800396689Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" policy-db-migrator | -------------- policy-apex-pdp | [2024-01-17T23:15:07.834+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | ssl.keystore.password = null policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-01-17T23:14:25.807702987Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.305598ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) policy-apex-pdp | [2024-01-17T23:15:07.837+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | ssl.keystore.type = JKS policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-01-17T23:14:25.812261954Z level=info msg="Executing migration" id="create dashboard_provisioning v2" policy-db-migrator | -------------- policy-apex-pdp | [2024-01-17T23:15:07.939+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | ssl.principal.mapping.rules = DEFAULT policy-pap | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-01-17T23:14:25.812872653Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=642.679µs policy-db-migrator | policy-apex-pdp | [2024-01-17T23:15:07.941+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 13 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | ssl.protocol = TLSv1.3 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-01-17T23:14:25.815980179Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" policy-db-migrator | policy-apex-pdp | [2024-01-17T23:15:08.044+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | ssl.provider = null policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-01-17T23:14:25.816696239Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 
duration=716.06µs policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql policy-apex-pdp | [2024-01-17T23:15:08.047+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | ssl.secure.random.implementation = null policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-01-17T23:14:25.820902742Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" policy-db-migrator | -------------- policy-apex-pdp | [2024-01-17T23:15:08.149+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 15 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | ssl.trustmanager.algorithm = PKIX policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-01-17T23:14:25.822866291Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.966919ms policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) policy-apex-pdp | [2024-01-17T23:15:08.156+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | ssl.truststore.certificates = null policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-01-17T23:14:25.825779684Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" policy-db-migrator | -------------- policy-apex-pdp | [2024-01-17T23:15:08.254+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:25.826102129Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=322.375µs policy-pap | sasl.login.read.timeout.ms = null kafka | ssl.truststore.location = null policy-db-migrator | policy-apex-pdp | [2024-01-17T23:15:08.262+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:25.829658641Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | ssl.truststore.password = null policy-db-migrator | policy-apex-pdp | [2024-01-17T23:15:08.358+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 17 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:25.83029412Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=635.369µs policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | ssl.truststore.type = JKS policy-db-migrator | 
policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
policy-apex-pdp | [2024-01-17T23:15:08.367+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 30 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.833821343Z level=info msg="Executing migration" id="Add check_sum column"
policy-pap | sasl.login.refresh.window.factor = 0.8
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:08.464+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.835937274Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.115771ms
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | transaction.max.timeout.ms = 900000
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-apex-pdp | [2024-01-17T23:15:08.470+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 32 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.842042904Z level=info msg="Executing migration" id="Add index for dashboard_title"
policy-pap | sasl.login.retry.backoff.max.ms = 10000
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:08.575+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 19 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.842800736Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=754.672µs
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | transaction.state.log.load.buffer.size = 5242880
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:08.577+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 34 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.84579214Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
policy-pap | sasl.mechanism = GSSAPI
kafka | transaction.state.log.min.isr = 2
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:08.680+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 36 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.846039964Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=248.024µs
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | transaction.state.log.num.partitions = 50
policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
policy-apex-pdp | [2024-01-17T23:15:08.683+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.848814985Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
policy-pap | sasl.oauthbearer.expected.audience = null
kafka | transaction.state.log.replication.factor = 3
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:08.784+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 38 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.849028908Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=214.133µs
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | transaction.state.log.segment.bytes = 104857600
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-apex-pdp | [2024-01-17T23:15:08.784+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 21 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.853367902Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
kafka | transactional.id.expiration.ms = 604800000
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:08.913+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 40 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.854130883Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=762.911µs
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | unclean.leader.election.enable = false
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:08.914+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.857062827Z level=info msg="Executing migration" id="Add isPublic for dashboard"
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
kafka | unstable.api.versions.enable = false
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:09.025+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 42 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.859159068Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.095941ms
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | zookeeper.clientCnxnSocket = null
policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
policy-apex-pdp | [2024-01-17T23:15:09.026+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 23 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.862291434Z level=info msg="Executing migration" id="create data_source table"
policy-pap | sasl.oauthbearer.scope.claim.name = scope
kafka | zookeeper.connect = zookeeper:2181
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:09.128+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.863094456Z level=info msg="Migration successfully executed" id="create data_source table" duration=802.662µs
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | zookeeper.connection.timeout.ms = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-01-17T23:14:25.867480741Z level=info msg="Executing migration" id="add index data_source.account_id"
kafka | zookeeper.max.in.flight.requests = 10
policy-apex-pdp | [2024-01-17T23:15:09.131+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 44 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-01-17T23:14:25.868874441Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.38743ms
kafka | zookeeper.metadata.migration.enable = false
policy-db-migrator | --------------
policy-apex-pdp | [2024-01-17T23:15:09.234+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 25 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | security.providers = null
grafana | logger=migrator t=2024-01-17T23:14:25.872322872Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
kafka | zookeeper.session.timeout.ms = 18000
policy-db-migrator |
policy-apex-pdp | [2024-01-17T23:15:09.237+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 46 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-pap | send.buffer.bytes = 131072
grafana | logger=migrator t=2024-01-17T23:14:25.874317701Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=2.001679ms
kafka | zookeeper.set.acl = false
policy-db-migrator |
policy-pap | session.timeout.ms = 45000
policy-apex-pdp | [2024-01-17T23:15:09.339+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 48 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
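Note on the policy-db-migrator entries above: each step follows the same pattern, announcing the upgrade file and then running an idempotent CREATE TABLE IF NOT EXISTS so re-runs are harmless. A bare-bones JDBC equivalent of one such step (the connection URL and credentials here are placeholders, not the values this job uses):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigrationStep {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/credentials; the CSIT job wires these via its own configuration
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_password");
                 Statement stmt = conn.createStatement()) {
                // Idempotent DDL, mirroring e.g. 0260-jpatoscanodetype_metadata.sql above
                stmt.execute("CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata "
                    + "(name VARCHAR(120) NULL, version VARCHAR(20) NULL, "
                    + "METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)");
            }
        }
    }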
grafana | logger=migrator t=2024-01-17T23:14:25.877452018Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
kafka | zookeeper.ssl.cipher.suites = null
policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-apex-pdp | [2024-01-17T23:15:09.342+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.878195348Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=743.1µs
kafka | zookeeper.ssl.client.enable = false
policy-db-migrator | --------------
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-apex-pdp | [2024-01-17T23:15:09.444+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 50 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.883666799Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
kafka | zookeeper.ssl.crl.enable = false
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | ssl.cipher.suites = null
policy-apex-pdp | [2024-01-17T23:15:09.446+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 27 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.884548703Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=885.024µs
kafka | zookeeper.ssl.enabled.protocols = null
policy-db-migrator | --------------
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
policy-apex-pdp | [2024-01-17T23:15:09.548+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 52 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.88912671Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
policy-db-migrator |
policy-pap | ssl.endpoint.identification.algorithm = https
policy-apex-pdp | [2024-01-17T23:15:09.553+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.894979876Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.852806ms
kafka | zookeeper.ssl.keystore.location = null
policy-db-migrator |
policy-pap | ssl.engine.factory.class = null
policy-apex-pdp | [2024-01-17T23:15:09.653+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 54 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.900365186Z level=info msg="Executing migration" id="create data_source table v2"
kafka | zookeeper.ssl.keystore.password = null
policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
policy-pap | ssl.key.password = null
policy-apex-pdp | [2024-01-17T23:15:09.656+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 29 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.902103132Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.735356ms
kafka | zookeeper.ssl.keystore.type = null
policy-db-migrator | --------------
policy-pap | ssl.keymanager.algorithm = SunX509
policy-apex-pdp | [2024-01-17T23:15:09.758+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 56 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.907518502Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
kafka | zookeeper.ssl.ocsp.enable = false
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
policy-pap | ssl.keystore.certificate.chain = null
policy-apex-pdp | [2024-01-17T23:15:09.761+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 30 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.908387105Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=865.663µs
kafka | zookeeper.ssl.protocol = TLSv1.2
policy-db-migrator | --------------
policy-pap | ssl.keystore.key = null
policy-apex-pdp | [2024-01-17T23:15:09.866+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 58 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.911649813Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
kafka | zookeeper.ssl.truststore.location = null
policy-db-migrator |
policy-pap | ssl.keystore.location = null
policy-apex-pdp | [2024-01-17T23:15:09.866+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 31 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.913378649Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.727416ms
kafka | zookeeper.ssl.truststore.password = null
policy-db-migrator |
policy-pap | ssl.keystore.password = null
policy-apex-pdp | [2024-01-17T23:15:09.968+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 32 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.916805619Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
kafka | zookeeper.ssl.truststore.type = null
policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
policy-pap | ssl.keystore.type = JKS
policy-apex-pdp | [2024-01-17T23:15:09.971+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 60 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.917823565Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.017456ms
kafka | (kafka.server.KafkaConfig)
policy-db-migrator | --------------
policy-pap | ssl.protocol = TLSv1.3
policy-apex-pdp | [2024-01-17T23:15:10.072+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 33 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.923960475Z level=info msg="Executing migration" id="Add column with_credentials"
kafka | [2024-01-17 23:14:33,111] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
policy-pap | ssl.provider = null
policy-apex-pdp | [2024-01-17T23:15:10.076+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 62 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.92631116Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.350315ms
kafka | [2024-01-17 23:14:33,111] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator | --------------
policy-pap | ssl.secure.random.implementation = null
policy-apex-pdp | [2024-01-17T23:15:10.179+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 64 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.931993854Z level=info msg="Executing migration" id="Add secure json data column"
kafka | [2024-01-17 23:14:33,112] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-db-migrator |
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-apex-pdp | [2024-01-17T23:15:10.180+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 34 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.940693563Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=8.700529ms
policy-db-migrator |
kafka | [2024-01-17 23:14:33,114] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
policy-pap | ssl.truststore.certificates = null
policy-apex-pdp | [2024-01-17T23:15:10.282+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 66 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.943758858Z level=info msg="Executing migration" id="Update data_source table charset"
policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
kafka | [2024-01-17 23:14:33,149] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
policy-pap | ssl.truststore.location = null
policy-apex-pdp | [2024-01-17T23:15:10.285+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 35 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.943784668Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=23.18µs
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:33,155] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
policy-pap | ssl.truststore.password = null
policy-apex-pdp | [2024-01-17T23:15:10.386+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 68 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.946030232Z level=info msg="Executing migration" id="Update initial version to 1"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
kafka | [2024-01-17 23:14:33,164] INFO Loaded 0 logs in 14ms (kafka.log.LogManager)
policy-pap | ssl.truststore.type = JKS
policy-apex-pdp | [2024-01-17T23:15:10.390+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 36 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.946200804Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=176.012µs
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:33,165] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
policy-apex-pdp | [2024-01-17T23:15:10.490+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 70 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.950846492Z level=info msg="Executing migration" id="Add read_only data column"
policy-db-migrator |
kafka | [2024-01-17 23:14:33,166] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
policy-pap |
policy-apex-pdp | [2024-01-17T23:15:10.496+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 37 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.953112716Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.266214ms
policy-db-migrator |
kafka | [2024-01-17 23:14:33,196] INFO Starting the log cleaner (kafka.log.LogCleaner)
policy-pap | [2024-01-17T23:15:02.818+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
policy-apex-pdp | [2024-01-17T23:15:10.598+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 72 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.957853436Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
kafka | [2024-01-17 23:14:33,240] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
policy-pap | [2024-01-17T23:15:02.819+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-apex-pdp | [2024-01-17T23:15:10.601+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 38 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.958381374Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=527.848µs
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:33,255] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
policy-pap | [2024-01-17T23:15:02.819+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705533302817
policy-apex-pdp | [2024-01-17T23:15:10.703+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 74 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.960921932Z level=info msg="Executing migration" id="Update json_data with nulls"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
kafka | [2024-01-17 23:14:33,280] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
policy-pap | [2024-01-17T23:15:02.821+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-1, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Subscribed to topic(s): policy-pdp-pap
policy-apex-pdp | [2024-01-17T23:15:10.707+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 39 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.961070524Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=155.912µs
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:33,303] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
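Note on the ConsumerConfig dump and the "Subscribed to topic(s): policy-pdp-pap" entry above: together they describe a plain Kafka consumer built from those values. A minimal sketch using only the key settings visible in the log (bootstrap kafka:9092, String deserializers, auto.offset.reset=latest; the group id here is a simplified stand-in for the generated ids the log shows):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapListener {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");  // the log uses generated group ids
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.println(r.value());  // e.g. the PDP_STATUS heartbeat JSON seen earlier
                }
            }
        }
    }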
policy-pap | [2024-01-17T23:15:02.821+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-apex-pdp | [2024-01-17T23:15:10.808+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 76 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.966755508Z level=info msg="Executing migration" id="Add uid column"
policy-db-migrator |
kafka | [2024-01-17 23:14:33,697] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-pap | allow.auto.create.topics = true
policy-apex-pdp | [2024-01-17T23:15:10.812+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 40 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.970717046Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.961548ms
policy-db-migrator |
kafka | [2024-01-17 23:14:33,720] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
policy-pap | auto.commit.interval.ms = 5000
policy-apex-pdp | [2024-01-17T23:15:10.919+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 41 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:25.974273059Z level=info msg="Executing migration" id="Update uid value"
policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
kafka | [2024-01-17 23:14:33,721] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
policy-pap | auto.include.jmx.reporter = true
policy-apex-pdp | [2024-01-17T23:15:10.921+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 78 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.974459382Z level=info msg="Migration successfully executed" id="Update uid value" duration=186.643µs
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:33,738] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
policy-pap | auto.offset.reset = latest
policy-apex-pdp | [2024-01-17T23:15:11.022+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 42 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.981494255Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
policy-pap | bootstrap.servers = [kafka:9092]
policy-apex-pdp | [2024-01-17T23:15:11.026+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 80 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.98241997Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=917.415µs
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:33,742] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
policy-pap | check.crcs = true
policy-apex-pdp | [2024-01-17T23:15:11.127+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 43 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.988005902Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
policy-db-migrator |
kafka | [2024-01-17 23:14:33,765] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | client.dns.lookup = use_all_dns_ips
policy-apex-pdp | [2024-01-17T23:15:11.129+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 82 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.988799624Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=787.482µs
policy-db-migrator |
kafka | [2024-01-17 23:14:33,787] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | client.id = consumer-policy-pap-2
policy-apex-pdp | [2024-01-17T23:15:11.231+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 44 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.994301195Z level=info msg="Executing migration" id="create api_key table"
policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
kafka | [2024-01-17 23:14:33,789] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | client.rack =
policy-apex-pdp | [2024-01-17T23:15:11.257+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 84 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:25.995417461Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.115976ms
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:33,790] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-01-17T23:14:26.02034061Z level=info msg="Executing migration" id="add index api_key.account_id"
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
kafka | [2024-01-17 23:14:33,806] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
policy-pap | default.api.timeout.ms = 60000
policy-apex-pdp | [2024-01-17T23:15:11.335+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 45 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:26.025141061Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=4.800491ms
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:33,827] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
policy-pap | enable.auto.commit = true
policy-apex-pdp | [2024-01-17T23:15:11.361+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 86 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:26.032947817Z level=info msg="Executing migration" id="add index api_key.key"
policy-db-migrator |
kafka | [2024-01-17 23:14:33,884] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1705533273841,1705533273841,1,0,0,72057610610802689,258,0,27
policy-pap | exclude.internal.topics = true
policy-apex-pdp | [2024-01-17T23:15:11.440+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 46 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:26.03390067Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=952.313µs
policy-db-migrator |
kafka | (kafka.zk.KafkaZkClient)
policy-pap | fetch.max.bytes = 52428800
policy-apex-pdp | [2024-01-17T23:15:11.464+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 88 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:26.037856108Z level=info msg="Executing migration" id="add index api_key.account_id_name"
policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
kafka | [2024-01-17 23:14:33,885] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
policy-pap | fetch.max.wait.ms = 500
grafana | logger=migrator t=2024-01-17T23:14:26.038675441Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=819.013µs
policy-apex-pdp | [2024-01-17T23:15:11.543+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 47 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:33,988] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
policy-pap | fetch.min.bytes = 1
grafana | logger=migrator t=2024-01-17T23:14:26.043325949Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
policy-apex-pdp | [2024-01-17T23:15:11.588+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 90 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
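Scattered through the lines above and below is the ConsumerConfig dump that policy-pap prints while constructing its Kafka consumer; only a handful of values are overridden (bootstrap.servers = [kafka:9092], group.id = policy-pap, auto.offset.reset = latest, String key/value deserializers) and the rest are client defaults. A minimal sketch of a consumer that would produce an equivalent dump, assuming nothing beyond the values visible in the log:

  import java.util.List;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  public final class PapConsumerSketch {
      public static KafkaConsumer<String, String> build() {
          Properties props = new Properties();
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
          // "latest" means a group with no committed offsets starts at the log end,
          // which is why an offset reset is logged further down.
          props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
          props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
          props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
          // Everything not set here keeps the defaults shown in the dump,
          // e.g. session.timeout.ms = 45000 and max.poll.records = 500.
          KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
          consumer.subscribe(List.of("policy-pdp-pap")); // logged as "Subscribed to topic(s): policy-pdp-pap"
          return consumer;
      }
  }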
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | [2024-01-17 23:14:33,995] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | group.id = policy-pap
grafana | logger=migrator t=2024-01-17T23:14:26.04402875Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=700.881µs
policy-apex-pdp | [2024-01-17T23:15:11.645+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 48 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:34,001] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | group.instance.id = null
grafana | logger=migrator t=2024-01-17T23:14:26.04809042Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
policy-apex-pdp | [2024-01-17T23:15:11.695+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 92 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-01-17 23:14:34,007] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
policy-pap | heartbeat.interval.ms = 3000
grafana | logger=migrator t=2024-01-17T23:14:26.049098895Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.013784ms
policy-apex-pdp | [2024-01-17T23:15:11.752+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 49 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-01-17 23:14:34,023] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
policy-pap | interceptor.classes = []
grafana | logger=migrator t=2024-01-17T23:14:26.055398838Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
policy-apex-pdp | [2024-01-17T23:15:11.849+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 94 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
kafka | [2024-01-17 23:14:34,028] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
policy-pap | internal.leave.group.on.close = true
grafana | logger=migrator t=2024-01-17T23:14:26.056625706Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.226628ms
policy-apex-pdp | [2024-01-17T23:15:11.854+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 50 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:34,036] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
grafana | logger=migrator t=2024-01-17T23:14:26.062160458Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
policy-apex-pdp | [2024-01-17T23:15:11.952+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 96 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | [2024-01-17 23:14:34,037] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:14:34,041] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-17T23:14:26.071901902Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=9.742394ms
policy-apex-pdp | [2024-01-17T23:15:11.970+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 51 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:34,056] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2024-01-17 23:14:34,191] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
grafana | logger=migrator t=2024-01-17T23:14:26.077374612Z level=info msg="Executing migration" id="create api_key table v2"
policy-apex-pdp | [2024-01-17T23:15:12.056+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 98 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
policy-pap | isolation.level = read_uncommitted
kafka | [2024-01-17 23:14:34,193] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
grafana | logger=migrator t=2024-01-17T23:14:26.0778237Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=446.938µs
policy-apex-pdp | [2024-01-17T23:15:12.075+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 52 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-01-17 23:14:34,194] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:26.082179854Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
policy-apex-pdp | [2024-01-17T23:15:12.160+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Error while fetching metadata with correlation id 100 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
policy-pap | max.partition.fetch.bytes = 1048576
kafka | [2024-01-17 23:14:34,224] INFO [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). (kafka.server.metadata.ZkMetadataCache)
grafana | logger=migrator t=2024-01-17T23:14:26.083441343Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.261349ms
policy-apex-pdp | [2024-01-17T23:15:12.178+00:00|WARN|NetworkClient|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Error while fetching metadata with correlation id 53 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
policy-pap | max.poll.interval.ms = 300000
kafka | [2024-01-17 23:14:34,224] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-17T23:14:26.092486406Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
policy-apex-pdp | [2024-01-17T23:15:12.271+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
policy-pap | max.poll.records = 500
kafka | [2024-01-17 23:14:34,228] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-17T23:14:26.093541392Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.058686ms
policy-apex-pdp | [2024-01-17T23:15:12.284+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] (Re-)joining group
policy-db-migrator | --------------
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-01-17 23:14:34,231] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-17T23:14:26.098434264Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
policy-apex-pdp | [2024-01-17T23:15:12.331+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Request joining group due to: need to re-join with the given member-id: consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2-e780fe82-c0bb-4c48-83e1-8a127e9c91dc
policy-db-migrator |
kafka | [2024-01-17 23:14:34,232] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
grafana | logger=migrator t=2024-01-17T23:14:26.099355317Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=921.353µs
policy-apex-pdp | [2024-01-17T23:15:12.332+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-pap | metric.reporters = []
kafka | [2024-01-17 23:14:34,249] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-17T23:14:26.103406428Z level=info msg="Executing migration" id="copy api_key v1 to v2"
policy-apex-pdp | [2024-01-17T23:15:12.332+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] (Re-)joining group
policy-pap | metrics.num.samples = 2
policy-db-migrator |
kafka | [2024-01-17 23:14:34,260] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
grafana | logger=migrator t=2024-01-17T23:14:26.103746733Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=340.095µs
policy-apex-pdp | [2024-01-17T23:15:15.357+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Successfully joined group with generation Generation{generationId=1, memberId='consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2-e780fe82-c0bb-4c48-83e1-8a127e9c91dc', protocol='range'}
policy-pap | metrics.recording.level = INFO
policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
kafka | [2024-01-17 23:14:34,268] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-17T23:14:26.105892445Z level=info msg="Executing migration" id="Drop old table api_key_v1"
policy-apex-pdp | [2024-01-17T23:15:15.365+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Finished assignment for group at generation 1: {consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2-e780fe82-c0bb-4c48-83e1-8a127e9c91dc=Assignment(partitions=[policy-pdp-pap-0])}
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:34,273] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-17T23:14:26.106464523Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=572.048µs
policy-apex-pdp | [2024-01-17T23:15:15.390+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Successfully synced group in generation Generation{generationId=1, memberId='consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2-e780fe82-c0bb-4c48-83e1-8a127e9c91dc', protocol='range'}
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
kafka | [2024-01-17 23:14:34,282] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
grafana | logger=migrator t=2024-01-17T23:14:26.110749086Z level=info msg="Executing migration" id="Update api_key table charset"
policy-apex-pdp | [2024-01-17T23:15:15.391+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-pap | receive.buffer.bytes = 65536
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:34,286] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
grafana | logger=migrator t=2024-01-17T23:14:26.110772606Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=24.37µs
policy-apex-pdp | [2024-01-17T23:15:15.397+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Adding newly assigned partitions: policy-pdp-pap-0
policy-pap | reconnect.backoff.max.ms = 1000
policy-db-migrator |
kafka | [2024-01-17 23:14:34,287] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
grafana | logger=migrator t=2024-01-17T23:14:26.114881017Z level=info msg="Executing migration" id="Add expires to api_key table"
policy-apex-pdp | [2024-01-17T23:15:15.413+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Found no committed offset for partition policy-pdp-pap-0
policy-pap | reconnect.backoff.ms = 50
policy-db-migrator |
kafka | [2024-01-17 23:14:34,294] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
grafana | logger=migrator t=2024-01-17T23:14:26.119063679Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=4.181662ms
policy-apex-pdp | [2024-01-17T23:15:15.435+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2, groupId=4041dc88-5007-445a-911f-3e52b8d238d9] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
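The apex-pdp consumer lines above trace a complete consumer-group handshake: the first join attempt is rejected with MemberIdRequiredException so the client rejoins with the broker-assigned member id, the range assignor then hands the group's only member the single partition policy-pdp-pap-0, no committed offset is found, and the position is reset according to auto.offset.reset. The same lifecycle can be observed from application code with a ConsumerRebalanceListener; a sketch, reusing the consumer built in the previous snippet:

  import java.time.Duration;
  import java.util.Collection;
  import java.util.List;
  import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.TopicPartition;

  public final class RebalanceWatcher {
      public static void watch(KafkaConsumer<String, String> consumer) {
          consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
              @Override
              public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                  System.out.println("revoked: " + parts); // runs before each rebalance completes
              }
              @Override
              public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                  // Fires after "Successfully synced group"; with one member and one
                  // partition this prints [policy-pdp-pap-0], as in the log above.
                  System.out.println("assigned: " + parts);
              }
          });
          consumer.poll(Duration.ofSeconds(1)); // the first poll drives the join/sync round trips
      }
  }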
policy-pap | request.timeout.ms = 30000
policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
kafka | [2024-01-17 23:14:34,302] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-17T23:14:26.124427968Z level=info msg="Executing migration" id="Add service account foreign key"
policy-apex-pdp | [2024-01-17T23:15:26.383+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
policy-pap | retry.backoff.ms = 100
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:34,302] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
grafana | logger=migrator t=2024-01-17T23:14:26.126974106Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.546008ms
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"354ee0e2-4bc6-4c3c-be39-81f934f0f052","timestampMs":1705533326383,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup"}
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
kafka | [2024-01-17 23:14:34,303] INFO Kafka version: 7.5.3-ccs (org.apache.kafka.common.utils.AppInfoParser)
grafana | logger=migrator t=2024-01-17T23:14:26.131202439Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
policy-apex-pdp | [2024-01-17T23:15:26.414+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-pap | sasl.jaas.config = null
policy-db-migrator | --------------
kafka | [2024-01-17 23:14:34,303] INFO Kafka commitId: 9090b26369455a2f335fbb5487fb89675ee406ab (org.apache.kafka.common.utils.AppInfoParser)
grafana | logger=migrator t=2024-01-17T23:14:26.131378491Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=176.242µs
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"354ee0e2-4bc6-4c3c-be39-81f934f0f052","timestampMs":1705533326383,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup"}
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator |
kafka | [2024-01-17 23:14:34,303] INFO Kafka startTimeMs: 1705533274298 (org.apache.kafka.common.utils.AppInfoParser)
grafana | logger=migrator t=2024-01-17T23:14:26.134622479Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
grafana | logger=migrator t=2024-01-17T23:14:26.137158577Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.538248ms
policy-apex-pdp | [2024-01-17T23:15:26.417+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:26.143847925Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
grafana | logger=migrator t=2024-01-17T23:14:26.146728488Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.883953ms
kafka | [2024-01-17 23:14:34,303] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
policy-apex-pdp | [2024-01-17T23:15:26.550+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
grafana | logger=migrator t=2024-01-17T23:14:26.153135483Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
grafana | logger=migrator t=2024-01-17T23:14:26.153751142Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=615.299µs
kafka | [2024-01-17 23:14:34,304] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
policy-apex-pdp | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"fbd43c14-a4e8-4077-b4cc-19a57a79f4ce","timestampMs":1705533326500,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:26.156871308Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
grafana | logger=migrator t=2024-01-17T23:14:26.157412936Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=541.708µs
kafka | [2024-01-17 23:14:34,304] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
policy-apex-pdp | [2024-01-17T23:15:26.562+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-01-17T23:14:26.161631439Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
grafana | logger=migrator t=2024-01-17T23:14:26.162358969Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=727.3µs
kafka | [2024-01-17 23:14:34,305] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
policy-apex-pdp | [2024-01-17T23:15:26.562+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:26.196738127Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
grafana | logger=migrator t=2024-01-17T23:14:26.198671866Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.937439ms
kafka | [2024-01-17 23:14:34,309] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"86e417ee-ba90-4082-b783-a8cb76967993","timestampMs":1705533326562,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup"}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:26.205364995Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
grafana | logger=migrator t=2024-01-17T23:14:26.206650024Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.28543ms
kafka | [2024-01-17 23:14:34,309] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
policy-apex-pdp | [2024-01-17T23:15:26.564+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:26.209674769Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
grafana | logger=migrator t=2024-01-17T23:14:26.210935997Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.260798ms
kafka | [2024-01-17 23:14:34,310] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"fbd43c14-a4e8-4077-b4cc-19a57a79f4ce","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"d6679e7c-c185-41fd-b2b2-bbf8515deade","timestampMs":1705533326564,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
grafana | logger=migrator t=2024-01-17T23:14:26.213742738Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
grafana | logger=migrator t=2024-01-17T23:14:26.213809569Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=67.071µs
kafka | [2024-01-17 23:14:34,310] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
policy-apex-pdp | [2024-01-17T23:15:26.578+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:26.233209766Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
grafana | logger=migrator t=2024-01-17T23:14:26.233260317Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=57.171µs
kafka | [2024-01-17 23:14:34,312] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"86e417ee-ba90-4082-b783-a8cb76967993","timestampMs":1705533326562,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup"}
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
grafana | logger=migrator t=2024-01-17T23:14:26.237045042Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
grafana | logger=migrator t=2024-01-17T23:14:26.241854584Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.805802ms
kafka | [2024-01-17 23:14:34,316] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
policy-apex-pdp | [2024-01-17T23:15:26.578+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:26.244783147Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
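The JSON bodies traded on policy-pdp-pap above are the PAP/PDP handshake itself: apex-pdp publishes a PDP_STATUS heartbeat, PAP answers with a PDP_UPDATE (and, further down, a PDP_STATE_CHANGE to ACTIVE), and apex-pdp acknowledges each with a PDP_STATUS response. Because the PDP also reads back its own publications from the topic, it logs "discarding event of type PDP_STATUS" for them. A hedged sketch of pulling out the routing fields with Jackson; the field names come from the payloads in the log, while the class and method are hypothetical:

  import com.fasterxml.jackson.databind.JsonNode;
  import com.fasterxml.jackson.databind.ObjectMapper;

  public final class PdpMessagePeek {
      private static final ObjectMapper MAPPER = new ObjectMapper();

      public static void peek(String payload) throws Exception {
          JsonNode msg = MAPPER.readTree(payload);
          // messageName decides the dispatch: PDP_STATUS, PDP_UPDATE, PDP_STATE_CHANGE.
          String messageName = msg.path("messageName").asText();
          String pdpGroup = msg.path("pdpGroup").asText(); // "defaultGroup" throughout this run
          String state = msg.path("state").asText();       // PASSIVE until the state change, then ACTIVE
          System.out.printf("%s group=%s state=%s%n", messageName, pdpGroup, state);
      }
  }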
grafana | logger=migrator t=2024-01-17T23:14:26.247641149Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.856672ms kafka | [2024-01-17 23:14:34,322] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) policy-apex-pdp | [2024-01-17T23:15:26.582+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:26.254157936Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" grafana | logger=migrator t=2024-01-17T23:14:26.254241518Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=84.411µs kafka | [2024-01-17 23:14:34,323] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"fbd43c14-a4e8-4077-b4cc-19a57a79f4ce","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"d6679e7c-c185-41fd-b2b2-bbf8515deade","timestampMs":1705533326564,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:26.260346438Z level=info msg="Executing migration" id="create quota table v1" grafana | logger=migrator t=2024-01-17T23:14:26.261389723Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.039015ms kafka | [2024-01-17 23:14:34,327] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) policy-apex-pdp | [2024-01-17T23:15:26.582+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql grafana | logger=migrator t=2024-01-17T23:14:26.268246015Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" policy-pap | sasl.kerberos.min.time.before.relogin = 60000 kafka | [2024-01-17 23:14:34,328] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) policy-apex-pdp | [2024-01-17T23:15:26.627+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | sasl.kerberos.service.name = null kafka | [2024-01-17 23:14:34,328] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) grafana | logger=migrator t=2024-01-17T23:14:26.269151218Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=909.364µs policy-apex-pdp | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c14afc04-dda2-444d-acd0-3073b0ca56f2","timestampMs":1705533326501,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT 
NULL, INPUTS_KEY VARCHAR(255) NULL) policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | [2024-01-17 23:14:34,329] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) grafana | logger=migrator t=2024-01-17T23:14:26.272283714Z level=info msg="Executing migration" id="Update quota table charset" policy-apex-pdp | [2024-01-17T23:15:26.629+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | [2024-01-17 23:14:34,332] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) grafana | logger=migrator t=2024-01-17T23:14:26.272312504Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=29.56µs policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c14afc04-dda2-444d-acd0-3073b0ca56f2","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"226abeb0-4de8-4f87-ac6f-32138fbc9058","timestampMs":1705533326629,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | policy-pap | sasl.login.callback.handler.class = null kafka | [2024-01-17 23:14:34,333] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-17T23:14:26.278369404Z level=info msg="Executing migration" id="create plugin_setting table" policy-apex-pdp | [2024-01-17T23:15:26.640+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | policy-pap | sasl.login.class = null kafka | [2024-01-17 23:14:34,335] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) grafana | logger=migrator t=2024-01-17T23:14:26.279350999Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=982.176µs policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c14afc04-dda2-444d-acd0-3073b0ca56f2","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"226abeb0-4de8-4f87-ac6f-32138fbc9058","timestampMs":1705533326629,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql policy-pap | sasl.login.connect.timeout.ms = null kafka | [2024-01-17 23:14:34,347] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-17T23:14:26.283235506Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" policy-apex-pdp | [2024-01-17T23:15:26.640+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | -------------- policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-01-17 23:14:34,347] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-17T23:14:26.284380622Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.242838ms policy-apex-pdp | [2024-01-17T23:15:26.673+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-01-17 23:14:34,347] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-17T23:14:26.293373536Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" policy-apex-pdp | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"b9138845-cc56-497c-8dee-71da850e574b","timestampMs":1705533326653,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-01-17 23:14:34,348] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-17T23:14:26.297665899Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.294433ms policy-apex-pdp | [2024-01-17T23:15:26.675+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] policy-db-migrator | policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-01-17 23:14:34,350] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-17T23:14:26.304100964Z level=info msg="Executing migration" id="Update plugin_setting table charset" policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpUpdate","policies":[],"response":{"responseTo":"b9138845-cc56-497c-8dee-71da850e574b","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"1096889e-4f82-44c6-9d25-d8a47fb87433","timestampMs":1705533326675,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-01-17 23:14:34,400] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-17T23:14:26.304141394Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=42.44µs policy-apex-pdp | [2024-01-17T23:15:26.682+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | > upgrade 0450-pdpgroup.sql policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-01-17 23:14:34,414] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.306316307Z level=info msg="Executing migration" id="create session table" policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"b9138845-cc56-497c-8dee-71da850e574b","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"1096889e-4f82-44c6-9d25-d8a47fb87433","timestampMs":1705533326675,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} policy-db-migrator | -------------- policy-pap | sasl.login.retry.backoff.ms = 100 kafka | [2024-01-17 23:14:34,480] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) grafana | logger=migrator t=2024-01-17T23:14:26.306970116Z level=info msg="Migration successfully executed" id="create session table" duration=652.999µs policy-apex-pdp | [2024-01-17T23:15:26.683+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) policy-pap | sasl.mechanism = GSSAPI kafka | [2024-01-17 23:14:34,480] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) grafana | logger=migrator t=2024-01-17T23:14:26.311425452Z level=info msg="Executing migration" id="Drop old table playlist table" policy-apex-pdp | [2024-01-17T23:15:56.149+00:00|INFO|RequestLog|qtp830863979-33] 172.17.0.3 - policyadmin [17/Jan/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.49.1" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 kafka | [2024-01-17 23:14:39,401] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-17T23:14:26.311502163Z level=info msg="Migration successfully executed" 
id="Drop old table playlist table" duration=76.861µs policy-apex-pdp | [2024-01-17T23:16:56.079+00:00|INFO|RequestLog|qtp830863979-28] 172.17.0.3 - policyadmin [17/Jan/2024:23:16:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.49.1" policy-db-migrator | policy-pap | sasl.oauthbearer.expected.audience = null kafka | [2024-01-17 23:14:39,401] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-17T23:14:26.314136192Z level=info msg="Executing migration" id="Drop old table playlist_item table" policy-db-migrator | policy-pap | sasl.oauthbearer.expected.issuer = null kafka | [2024-01-17 23:15:05,602] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) grafana | logger=migrator t=2024-01-17T23:14:26.314316485Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=182.013µs policy-db-migrator | > upgrade 0460-pdppolicystatus.sql policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 kafka | [2024-01-17 23:15:05,602] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) grafana | logger=migrator t=2024-01-17T23:14:26.318187612Z level=info msg="Executing migration" id="create playlist table v2" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 kafka | [2024-01-17 23:15:05,681] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-17T23:14:26.319311049Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.123157ms policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 kafka | [2024-01-17 23:15:05,739] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) grafana | logger=migrator t=2024-01-17T23:14:26.322961663Z level=info msg="Executing migration" id="create playlist item table v2" policy-db-migrator | -------------- policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-pap | sasl.oauthbearer.scope.claim.name = scope kafka | [2024-01-17 23:15:05,889] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(TB_lmqBXRfuqVYs70rfOKA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(ZZVFVp_CTPq7ZebUsmWrBQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) policy-db-migrator | policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-01-17 23:15:05,891] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) policy-db-migrator | policy-pap | security.protocol = PLAINTEXT policy-pap | security.providers = null kafka | [2024-01-17 
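The two AdminZkClient entries above show the broker creating policy-pdp-pap (a single partition) and the internal __consumer_offsets topic (50 compacted partitions), all assigned to broker 1. For reference, a minimal sketch of creating the application topic programmatically with Kafka's AdminClient API; the bootstrap address and single-broker replication factor are taken from this log, while the class name and error handling are illustrative:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreatePdpPapTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Broker address as it appears throughout this log.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // One partition, replication factor 1: matches the assignment
                // HashMap(0 -> ArrayBuffer(1)) logged by kafka.zk.AdminZkClient.
                admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1)))
                     .all().get(); // block until the controller has created it
            }
        }
    }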
policy-db-migrator | > upgrade 0470-pdp.sql
policy-pap | send.buffer.bytes = 131072
policy-pap | session.timeout.ms = 45000
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | socket.connection.setup.timeout.max.ms = 30000
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | ssl.cipher.suites = null
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.endpoint.identification.algorithm = https
policy-pap | ssl.engine.factory.class = null
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | ssl.key.password = null
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | ssl.keystore.certificate.chain = null
policy-pap | ssl.keystore.key = null
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0480-pdpstatistics.sql
policy-pap | ssl.keystore.location = null
policy-pap | ssl.keystore.password = null
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.keystore.type = JKS
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
policy-pap | ssl.provider = null
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.trustmanager.algorithm = PKIX
policy-pap | ssl.truststore.certificates = null
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | ssl.truststore.location = null
policy-pap | ssl.truststore.password = null
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | ssl.truststore.type = JKS
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-01-17 23:15:05,894] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
policy-pap |
policy-pap | [2024-01-17T23:15:02.827+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
kafka | [2024-01-17 23:15:05,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-17T23:15:02.827+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
policy-pap | [2024-01-17T23:15:02.827+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705533302827
kafka | [2024-01-17 23:15:05,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | [2024-01-17T23:15:02.827+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
policy-pap | [2024-01-17T23:15:03.147+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
kafka | [2024-01-17 23:15:05,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
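Each db-migrator step above follows the same pattern: a numbered upgrade script wrapping one idempotent CREATE TABLE IF NOT EXISTS statement, so re-running the same script is harmless. A minimal JDBC sketch of applying such a step; the JDBC URL and credentials are placeholders (the real values are wired into the migrator container's environment), and the DDL is the 0480-pdpstatistics.sql statement abbreviated to a few columns:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ApplyMigrationStep {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details, not values from this job.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_pass");
                 Statement stmt = conn.createStatement()) {
                // Idempotent DDL: running this step twice is a no-op.
                stmt.execute("CREATE TABLE IF NOT EXISTS pdpstatistics ("
                        + "PDPGROUPNAME VARCHAR(120) NULL, "
                        + "timeStamp datetime NOT NULL, "
                        + "name VARCHAR(120) NOT NULL, "
                        + "version VARCHAR(20) NOT NULL, "
                        + "PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))");
            }
        }
    }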
policy-pap | [2024-01-17T23:15:03.306+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
policy-pap | [2024-01-17T23:15:03.542+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@1c3b221f, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@750c23a3, org.springframework.security.web.context.SecurityContextHolderFilter@11d422fd, org.springframework.security.web.header.HeaderWriterFilter@4866e0a7, org.springframework.security.web.authentication.logout.LogoutFilter@55cb3b7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@71d2261e, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1331d6fd, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@5b1f0f26, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@238280df, org.springframework.security.web.access.ExceptionTranslationFilter@17e6d07b, org.springframework.security.web.access.intercept.AuthorizationFilter@5bf1b528]
policy-db-migrator | --------------
policy-pap | [2024-01-17T23:15:04.870+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
policy-pap | [2024-01-17T23:15:04.931+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
kafka | [2024-01-17 23:15:05,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-17T23:15:04.965+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
policy-pap | [2024-01-17T23:15:04.983+00:00|INFO|ServiceManager|main] Policy PAP starting
kafka | [2024-01-17 23:15:05,895] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-17T23:15:04.983+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
policy-pap | [2024-01-17T23:15:04.984+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
policy-pap | [2024-01-17T23:15:04.985+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
policy-pap | [2024-01-17T23:15:04.985+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-17T23:15:04.986+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
policy-pap | [2024-01-17T23:15:04.986+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
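The DefaultSecurityFilterChain entry above lists the servlet filters Spring Security installed in front of the PAP REST API: basic authentication, logout handling, exception translation and a final AuthorizationFilter. A configuration of roughly this shape produces such a chain; this sketch is illustrative and is not policy-pap's actual configuration class:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.security.config.annotation.web.builders.HttpSecurity;
    import org.springframework.security.web.SecurityFilterChain;

    @Configuration
    public class BasicAuthSecuritySketch {
        @Bean
        SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
            // Require authentication on every request and enable HTTP Basic:
            // this yields a chain containing BasicAuthenticationFilter and
            // AuthorizationFilter, like the one logged above.
            http.authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
                .httpBasic(basic -> { });
            return http.build();
        }
    }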
policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
policy-pap | [2024-01-17T23:15:04.992+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=093ff4e0-f365-4742-90a8-254a3129a143, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1e3a2177
policy-pap | [2024-01-17T23:15:05.002+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=093ff4e0-f365-4742-90a8-254a3129a143, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-17T23:15:05.002+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
policy-pap | allow.auto.create.topics = true
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | auto.commit.interval.ms = 5000
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
policy-pap | auto.offset.reset = latest
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
policy-pap | check.crcs = true
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
policy-pap | client.id = consumer-093ff4e0-f365-4742-90a8-254a3129a143-3
policy-pap | client.rack =
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
grafana | logger=migrator t=2024-01-17T23:14:26.323683183Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=721.52µs
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | connections.max.idle.ms = 540000
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | default.api.timeout.ms = 60000
grafana | logger=migrator t=2024-01-17T23:14:26.330482314Z level=info msg="Executing migration" id="Update playlist table charset"
policy-db-migrator |
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | enable.auto.commit = true
grafana | logger=migrator t=2024-01-17T23:14:26.330514105Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=32.831µs
policy-db-migrator |
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-pap | exclude.internal.topics = true
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.334192369Z level=info msg="Executing migration" id="Update playlist_item table charset"
policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
policy-pap | fetch.max.bytes = 52428800
kafka | [2024-01-17 23:15:05,896] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.33426976Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=26.43µs
policy-db-migrator | --------------
policy-pap | fetch.max.wait.ms = 500
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.340968329Z level=info msg="Executing migration" id="Add playlist column created_at"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
policy-pap | fetch.min.bytes = 1
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.346460911Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.503672ms
policy-db-migrator | --------------
policy-pap | group.id = 093ff4e0-f365-4742-90a8-254a3129a143
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.350910766Z level=info msg="Executing migration" id="Add playlist column updated_at"
policy-db-migrator |
policy-pap | group.instance.id = null
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.353815199Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.904333ms
policy-db-migrator |
policy-pap | heartbeat.interval.ms = 3000
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.358572789Z level=info msg="Executing migration" id="drop preferences table v2"
policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
policy-pap | interceptor.classes = []
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.358656361Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=84.122µs
policy-db-migrator | --------------
policy-pap | internal.leave.group.on.close = true
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.363225599Z level=info msg="Executing migration" id="drop preferences table v3"
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.36331176Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=86.532µs
policy-db-migrator | --------------
policy-pap | isolation.level = read_uncommitted
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.366559008Z level=info msg="Executing migration" id="create preferences table v3"
policy-db-migrator |
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.367793485Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.234237ms
policy-db-migrator |
policy-pap | max.partition.fetch.bytes = 1048576
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.373983777Z level=info msg="Executing migration" id="Update preferences table charset"
policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
policy-pap | max.poll.interval.ms = 300000
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.374056328Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=74.901µs
policy-db-migrator | --------------
policy-pap | max.poll.records = 500
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.377408768Z level=info msg="Executing migration" id="Add column team_id in preferences"
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
grafana | logger=migrator t=2024-01-17T23:14:26.38293306Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.523522ms
grafana | logger=migrator t=2024-01-17T23:14:26.386196188Z level=info msg="Executing migration" id="Update team_id column values in preferences"
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:26.3863404Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=144.482µs
grafana | logger=migrator t=2024-01-17T23:14:26.389007969Z level=info msg="Executing migration" id="Add column week_start in preferences"
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:26.392136396Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.128306ms
grafana | logger=migrator t=2024-01-17T23:14:26.399694548Z level=info msg="Executing migration" id="Add column preferences.json_data"
kafka | [2024-01-17 23:15:05,897] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:26.405742117Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=6.03847ms
grafana | logger=migrator t=2024-01-17T23:14:26.410188312Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
kafka | [2024-01-17 23:15:05,898] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
grafana | logger=migrator t=2024-01-17T23:14:26.410258083Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=70.261µs
grafana | logger=migrator t=2024-01-17T23:14:26.413074695Z level=info msg="Executing migration" id="Add preferences index org_id"
kafka | [2024-01-17 23:15:05,898] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:26.414208112Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.133037ms
grafana | logger=migrator t=2024-01-17T23:14:26.420787009Z level=info msg="Executing migration" id="Add preferences index user_id"
kafka | [2024-01-17 23:15:05,903] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
grafana | logger=migrator t=2024-01-17T23:14:26.42221429Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.427781ms
grafana | logger=migrator t=2024-01-17T23:14:26.426442222Z level=info msg="Executing migration" id="create alert table v1"
kafka | [2024-01-17 23:15:05,903] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:26.427419046Z level=info msg="Migration successfully executed" id="create alert table v1" duration=976.914µs
grafana | logger=migrator t=2024-01-17T23:14:26.430458972Z level=info msg="Executing migration" id="add index alert org_id & id "
kafka | [2024-01-17 23:15:05,903] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | metric.reporters = []
kafka | [2024-01-17 23:15:05,903] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | metrics.num.samples = 2
policy-pap | metrics.recording.level = INFO
policy-db-migrator |
kafka | [2024-01-17 23:15:05,903] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | metrics.sample.window.ms = 30000
policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
policy-pap | receive.buffer.bytes = 65536
policy-pap | reconnect.backoff.max.ms = 1000
kafka | [2024-01-17 23:15:05,903] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
policy-pap | reconnect.backoff.ms = 50
policy-pap | request.timeout.ms = 30000
kafka | [2024-01-17 23:15:05,903] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | retry.backoff.ms = 100
policy-pap | sasl.client.callback.handler.class = null
kafka | [2024-01-17 23:15:05,904] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | sasl.jaas.config = null
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka | [2024-01-17 23:15:05,904] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-pap | sasl.kerberos.service.name = null
kafka | [2024-01-17 23:15:05,904] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
kafka | [2024-01-17 23:15:05,904] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | sasl.login.callback.handler.class = null
policy-pap | sasl.login.class = null
kafka | [2024-01-17 23:15:05,904] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0570-toscadatatype.sql
policy-pap | sasl.login.connect.timeout.ms = null
policy-pap | sasl.login.read.timeout.ms = null
kafka | [2024-01-17 23:15:05,904] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-pap | sasl.login.refresh.min.period.seconds = 60
kafka | [2024-01-17 23:15:05,904] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-pap | sasl.login.refresh.window.jitter = 0.05
kafka | [2024-01-17 23:15:05,904] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-pap | sasl.login.retry.backoff.ms = 100
kafka | [2024-01-17 23:15:05,904] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | sasl.mechanism = GSSAPI
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
kafka | [2024-01-17 23:15:05,904] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | sasl.oauthbearer.expected.audience = null
policy-pap | sasl.oauthbearer.expected.issuer = null
kafka | [2024-01-17 23:15:05,905] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0580-toscadatatypes.sql
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
kafka | [2024-01-17 23:15:05,905] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
kafka | [2024-01-17 23:15:05,905] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
policy-pap | sasl.oauthbearer.scope.claim.name = scope
policy-pap | sasl.oauthbearer.sub.claim.name = sub
kafka | [2024-01-17 23:15:05,905] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | sasl.oauthbearer.token.endpoint.url = null
policy-pap | security.protocol = PLAINTEXT
kafka | [2024-01-17 23:15:05,905] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | security.providers = null
policy-pap | send.buffer.bytes = 131072
kafka | [2024-01-17 23:15:05,905] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
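The ConsumerConfig dump interleaved above is printed by the Kafka client itself whenever a consumer is constructed. A minimal sketch that reproduces the pattern with a few of the logged values (bootstrap.servers, auto.offset.reset, the String deserializers); the group id below is illustrative, since the real one (093ff4e0-f365-4742-90a8-254a3129a143) is generated per run:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // logged bootstrap.servers
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap-example");  // illustrative group id
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");     // logged auto.offset.reset
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // The constructor logs the full "ConsumerConfig values:" block seen above.
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                consumer.poll(Duration.ofSeconds(1)); // joins the group; fetches whatever is new
            }
        }
    }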
policy-db-migrator |
policy-pap | session.timeout.ms = 45000
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-01-17 23:15:05,905] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
policy-pap | socket.connection.setup.timeout.ms = 10000
policy-pap | ssl.cipher.suites = null
kafka | [2024-01-17 23:15:05,905] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
grafana | logger=migrator t=2024-01-17T23:14:26.431470597Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.015995ms
kafka | [2024-01-17 23:15:05,905] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap | ssl.endpoint.identification.algorithm = https
grafana | logger=migrator t=2024-01-17T23:14:26.43844747Z level=info msg="Executing migration" id="add index alert state"
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.engine.factory.class = null
grafana | logger=migrator t=2024-01-17T23:14:26.441472264Z level=info msg="Migration successfully executed" id="add index alert state" duration=3.024765ms
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | ssl.key.password = null
grafana | logger=migrator t=2024-01-17T23:14:26.448540099Z level=info msg="Executing migration" id="add index alert dashboard_id"
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | ssl.keymanager.algorithm = SunX509
grafana | logger=migrator t=2024-01-17T23:14:26.449457682Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=916.913µs
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
policy-pap | ssl.keystore.certificate.chain = null
grafana | logger=migrator t=2024-01-17T23:14:26.454291694Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.keystore.key = null
grafana | logger=migrator t=2024-01-17T23:14:26.455113876Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=821.842µs
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
policy-pap | ssl.keystore.location = null
grafana | logger=migrator t=2024-01-17T23:14:26.458306583Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.keystore.password = null
grafana | logger=migrator t=2024-01-17T23:14:26.4594Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.092437ms
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | ssl.keystore.type = JKS
grafana | logger=migrator t=2024-01-17T23:14:26.464247752Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | ssl.protocol = TLSv1.3
grafana | logger=migrator t=2024-01-17T23:14:26.465081544Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=833.182µs
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
policy-pap | ssl.provider = null
grafana | logger=migrator t=2024-01-17T23:14:26.471216154Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.secure.random.implementation = null
grafana | logger=migrator t=2024-01-17T23:14:26.488118854Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=16.8918ms
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
policy-pap | ssl.trustmanager.algorithm = PKIX
grafana | logger=migrator t=2024-01-17T23:14:26.492377567Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
kafka | [2024-01-17 23:15:05,906] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | ssl.truststore.certificates = null
grafana | logger=migrator t=2024-01-17T23:14:26.492845444Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=468.427µs
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | ssl.truststore.location = null
grafana | logger=migrator t=2024-01-17T23:14:26.497572053Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | ssl.truststore.password = null
grafana | logger=migrator t=2024-01-17T23:14:26.498542558Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=977.455µs
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
policy-pap | ssl.truststore.type = JKS
grafana | logger=migrator t=2024-01-17T23:14:26.503540232Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
grafana | logger=migrator t=2024-01-17T23:14:26.503824757Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=284.764µs
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
policy-pap |
grafana | logger=migrator t=2024-01-17T23:14:26.506346634Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator | --------------
policy-pap | [2024-01-17T23:15:05.008+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
grafana | logger=migrator t=2024-01-17T23:14:26.506884642Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=537.948µs
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
policy-db-migrator |
policy-pap | [2024-01-17T23:15:05.009+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.51013652Z level=info msg="Executing migration" id="create alert_notification table v1"
policy-db-migrator |
policy-pap | [2024-01-17T23:15:05.009+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705533305008
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.511065973Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=929.343µs
policy-db-migrator | > upgrade 0630-toscanodetype.sql
policy-pap | [2024-01-17T23:15:05.009+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Subscribed to topic(s): policy-pdp-pap
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.521498287Z level=info msg="Executing migration" id="Add column is_default"
policy-db-migrator | --------------
policy-pap | [2024-01-17T23:15:05.009+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.526064845Z level=info msg="Migration successfully executed" id="Add column is_default" duration=4.495777ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
policy-pap | [2024-01-17T23:15:05.009+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f3f25057-810f-40c1-bb05-67d949feb974, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@61555218
kafka | [2024-01-17 23:15:05,907] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.531111489Z level=info msg="Executing migration" id="Add column frequency"
policy-db-migrator | --------------
policy-pap | [2024-01-17T23:15:05.010+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f3f25057-810f-40c1-bb05-67d949feb974, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
kafka | [2024-01-17 23:15:05,907] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.535838849Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.74159ms
policy-db-migrator |
policy-pap | [2024-01-17T23:15:05.010+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
kafka | [2024-01-17 23:15:06,542] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.539557244Z level=info msg="Executing migration" id="Add column send_reminder"
policy-db-migrator |
policy-pap | allow.auto.create.topics = true
kafka | [2024-01-17 23:15:06,542] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.543228269Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.670785ms
policy-db-migrator | > upgrade 0640-toscanodetypes.sql
policy-pap | auto.commit.interval.ms = 5000
kafka | [2024-01-17 23:15:06,542] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.548104661Z level=info msg="Executing migration" id="Add column disable_resolve_message"
policy-db-migrator | --------------
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-01-17 23:15:06,542] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.552424535Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=4.319283ms
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
policy-pap | auto.offset.reset = latest
kafka | [2024-01-17 23:15:06,542] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.561756982Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
policy-db-migrator | --------------
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-01-17 23:15:06,542] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.562653926Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=896.734µs
policy-db-migrator |
policy-pap | check.crcs = true
kafka | [2024-01-17 23:15:06,542] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.565999645Z level=info msg="Executing migration" id="Update alert table charset"
policy-db-migrator |
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-01-17 23:15:06,542] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.566065616Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=67.621µs
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
policy-pap | client.id = consumer-policy-pap-4
kafka | [2024-01-17 23:15:06,542] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:26.570914488Z level=info msg="Executing migration" id="Update alert_notification table charset"
policy-db-migrator | --------------
policy-pap | client.rack =
kafka | [2024-01-17 23:15:06,542] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
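The controller entries above and below walk every new partition through Kafka's partition state machine: NonExistentPartition becomes NewPartition once replicas are assigned, then NewPartition becomes OnlinePartition once a LeaderAndIsr with leader=1 is written. A conceptual sketch of that lifecycle, not Kafka's actual implementation (which lives in the controller's PartitionStateMachine):

    public class PartitionLifecycleSketch {
        enum PartitionState { NON_EXISTENT, NEW, ONLINE }

        // Mirrors the transitions printed by state.change.logger: replicas are
        // assigned first, then the partition comes online with a leader and ISR.
        static PartitionState next(PartitionState s) {
            switch (s) {
                case NON_EXISTENT: return PartitionState.NEW;    // "with assigned replicas 1"
                case NEW:          return PartitionState.ONLINE; // "with state LeaderAndIsr(leader=1, ...)"
                default:           return s;                     // ONLINE is terminal in this sketch
            }
        }

        public static void main(String[] args) {
            PartitionState s = PartitionState.NON_EXISTENT;
            s = next(s);
            s = next(s);
            System.out.println("__consumer_offsets-22 -> " + s); // ONLINE
        }
    }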
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.570958059Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=46.251µs policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | connections.max.idle.ms = 540000 kafka | [2024-01-17 23:15:06,542] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.576502991Z level=info msg="Executing migration" id="create notification_journal table v1" policy-db-migrator | -------------- policy-pap | default.api.timeout.ms = 60000 kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.578057013Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.550762ms policy-db-migrator | policy-pap | enable.auto.commit = true kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.582485979Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" policy-db-migrator | policy-pap | exclude.internal.topics = true kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.583657726Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.171007ms policy-db-migrator | > upgrade 0660-toscaparameter.sql policy-pap | fetch.max.bytes = 52428800 kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.624338098Z level=info msg="Executing migration" id="drop alert_notification_journal" policy-db-migrator | -------------- policy-pap | fetch.max.wait.ms 
= 500 kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.625953391Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.612303ms policy-pap | fetch.min.bytes = 1 kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) grafana | logger=migrator t=2024-01-17T23:14:26.632104502Z level=info msg="Executing migration" id="create alert_notification_state table v1" policy-pap | group.id = policy-pap kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.632945645Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=843.853µs policy-pap | group.instance.id = null kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:26.63599182Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" policy-pap | heartbeat.interval.ms = 3000 kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:26.636935284Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=946.413µs policy-pap | interceptor.classes = [] kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0670-toscapolicies.sql grafana | logger=migrator 
t=2024-01-17T23:14:26.640991534Z level=info msg="Executing migration" id="Add for to alert table" policy-pap | internal.leave.group.on.close = true policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.645654722Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.663438ms policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.64888429Z level=info msg="Executing migration" id="Add column uid in alert_notification" policy-pap | isolation.level = read_uncommitted policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.652518894Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.634634ms policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | max.partition.fetch.bytes = 1048576 grafana | logger=migrator t=2024-01-17T23:14:26.655660951Z level=info msg="Executing migration" id="Update uid column values in alert_notification" policy-db-migrator | kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | max.poll.interval.ms = 300000 grafana | logger=migrator t=2024-01-17T23:14:26.655864464Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=205.523µs policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
policy-pap | max.poll.records = 500 grafana | logger=migrator t=2024-01-17T23:14:26.661185952Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-01-17T23:14:26.662372619Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.182837ms policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-01-17T23:14:26.668746294Z level=info msg="Executing migration" id="Remove unique index org_id_name" policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-01-17T23:14:26.669610247Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=864.283µs policy-db-migrator | kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metrics.recording.level = INFO grafana | logger=migrator t=2024-01-17T23:14:26.672958306Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" policy-db-migrator | kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-01-17T23:14:26.67659032Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.631883ms policy-db-migrator | > upgrade 0690-toscapolicy.sql policy-db-migrator | -------------- policy-pap | partition.assignment.strategy = [class 
org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) policy-pap | receive.buffer.bytes = 65536 grafana | logger=migrator t=2024-01-17T23:14:26.681901968Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | reconnect.backoff.max.ms = 1000 grafana | logger=migrator t=2024-01-17T23:14:26.68201399Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=112.572µs kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | reconnect.backoff.ms = 50 grafana | logger=migrator t=2024-01-17T23:14:26.685696755Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | request.timeout.ms = 30000 grafana | logger=migrator t=2024-01-17T23:14:26.686695809Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=998.594µs kafka | [2024-01-17 23:15:06,543] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0700-toscapolicytype.sql policy-pap | retry.backoff.ms = 100 grafana | logger=migrator t=2024-01-17T23:14:26.690215931Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
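The ConsumerConfig dump above (bootstrap.servers = [kafka:9092], group.id = policy-pap, auto.offset.reset = latest, StringDeserializer for keys and values, max.poll.records = 500) corresponds roughly to the Java sketch below. This is a minimal illustration of the stock Kafka client API using only values visible in this log, not the actual policy-pap source; the class name PdpPapConsumerSketch is made up for the example.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values taken from the ConsumerConfig dump logged by policy-pap above.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // policy-pap logs "Subscribed to topic(s): policy-pdp-pap" further below.
                consumer.subscribe(List.of("policy-pdp-pap"));
                // fetchTimeout=15000 in the SingleThreadedBusTopicSource dump above.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(15000));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }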
policy-db-migrator | -------------- policy-pap | sasl.client.callback.handler.class = null grafana | logger=migrator t=2024-01-17T23:14:26.691256097Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.042796ms kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) policy-pap | sasl.jaas.config = null grafana | logger=migrator t=2024-01-17T23:14:26.695941336Z level=info msg="Executing migration" id="Drop old annotation table v4" kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit grafana | logger=migrator t=2024-01-17T23:14:26.696062307Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=127.541µs kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.kerberos.min.time.before.relogin = 60000 grafana | logger=migrator t=2024-01-17T23:14:26.700845138Z level=info msg="Executing migration" id="create annotation table v5" kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | policy-pap | sasl.kerberos.service.name = null grafana | logger=migrator t=2024-01-17T23:14:26.702482413Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.635735ms kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | > upgrade 0710-toscapolicytypes.sql policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 grafana | logger=migrator t=2024-01-17T23:14:26.710184166Z level=info msg="Executing migration" id="add index annotation 0 v3" kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 grafana | logger=migrator t=2024-01-17T23:14:26.71111018Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=927.284µs kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) grafana | logger=migrator t=2024-01-17T23:14:26.715224621Z level=info msg="Executing migration" id="add index annotation 1 v3" kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-db-migrator | -------------- policy-pap | sasl.login.callback.handler.class = null grafana | logger=migrator t=2024-01-17T23:14:26.716331598Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.106787ms kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.class = null grafana | logger=migrator t=2024-01-17T23:14:26.719559715Z level=info msg="Executing migration" id="add index annotation 2 v3" policy-db-migrator | kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) policy-pap | sasl.login.connect.timeout.ms = null grafana | logger=migrator t=2024-01-17T23:14:26.720459468Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=899.393µs policy-db-migrator | kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.724987986Z level=info msg="Executing migration" id="add index annotation 3 v3" policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql policy-pap | sasl.login.read.timeout.ms = null kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator 
t=2024-01-17T23:14:26.72599559Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.001084ms policy-db-migrator | -------------- policy-pap | sasl.login.refresh.buffer.seconds = 300 kafka | [2024-01-17 23:15:06,544] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.730335765Z level=info msg="Executing migration" id="add index annotation 4 v3" policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | sasl.login.refresh.min.period.seconds = 60 kafka | [2024-01-17 23:15:06,546] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.731332239Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=996.504µs policy-db-migrator | -------------- policy-pap | sasl.login.refresh.window.factor = 0.8 kafka | [2024-01-17 23:15:06,546] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.734414175Z level=info msg="Executing migration" id="Update annotation table charset" policy-db-migrator | policy-pap | sasl.login.refresh.window.jitter = 0.05 kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.734441155Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=28.56µs policy-db-migrator | policy-pap | sasl.login.retry.backoff.max.ms = 10000 kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 
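Each become-leader LeaderAndIsr request above installs broker 1 as leader (leader=1, isr=[1], replicas=[1]) for one partition; in this single-broker CSIT that covers the 50 __consumer_offsets partitions plus policy-pdp-pap-0, the 51 partitions the controller reports below. Once applied, the resulting assignment can be inspected with the standard Kafka Admin API; a minimal sketch, assuming the same kafka:9092 bootstrap address (illustrative, not part of this job):

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class LeaderInspectionSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                TopicDescription desc = admin.describeTopics(List.of("policy-pdp-pap"))
                        .topicNameValues().get("policy-pdp-pap").get();
                // On this single-broker cluster every partition should report
                // leader=1 and isr=[1], matching the state.change.logger lines.
                desc.partitions().forEach(p -> System.out.printf(
                        "partition %d: leader=%s isr=%s%n", p.partition(), p.leader(), p.isr()));
            }
        }
    }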
grafana | logger=migrator t=2024-01-17T23:14:26.741605301Z level=info msg="Executing migration" id="Add column region_id to annotation table" policy-db-migrator | > upgrade 0730-toscaproperty.sql kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.746465112Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.859241ms policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.75031734Z level=info msg="Executing migration" id="Drop category_id index" policy-pap | sasl.mechanism = GSSAPI policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.751226753Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=900.473µs policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:26.757568507Z level=info msg="Executing migration" id="Add column tags to annotation table" kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:26.762042213Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.473426ms kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql grafana | logger=migrator t=2024-01-17T23:14:26.765412523Z level=info msg="Executing migration" id="Create annotation_tag table v2" kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.766179214Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=766.161µs kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) grafana | logger=migrator t=2024-01-17T23:14:26.770134653Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.771073096Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=938.263µs kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:26.774156752Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:26.779695794Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=5.538472ms policy-pap | sasl.oauthbearer.token.endpoint.url = null kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql grafana | logger=migrator t=2024-01-17T23:14:26.786662637Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" policy-pap | security.protocol = PLAINTEXT kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.805471505Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=18.808037ms policy-pap | security.providers = null kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) grafana | logger=migrator t=2024-01-17T23:14:26.810887055Z level=info msg="Executing migration" id="Create annotation_tag table v3" policy-pap | send.buffer.bytes = 131072 kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.811448044Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=563.769µs policy-pap | session.timeout.ms = 45000 kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) policy-db-migrator | policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:26.81460767Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql grafana | logger=migrator t=2024-01-17T23:14:26.81531342Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=704.91µs kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) policy-pap | ssl.cipher.suites = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.819264098Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-pap | ssl.endpoint.identification.algorithm = https 
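The *_tosca* join tables the migrator creates in this stretch (0650, 0680, 0720, and the 0760 statement that follows) all share one shape: a composite primary key naming the container concept (conceptContainerMapName/Version, conceptContainerName/Version) plus nullable (name, version) columns for each contained concept. A lookup against one of these tables would look roughly like the JDBC sketch below; the URL, credentials, and queried container values are illustrative assumptions, not values taken from this job, and a MariaDB driver is assumed on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ToscaJoinLookupSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details for illustration only.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT name, version FROM toscapolicies_toscapolicy "
                   + "WHERE conceptContainerName = ? AND conceptContainerVersion = ?")) {
                ps.setString(1, "ToscaPolicies"); // assumed container name
                ps.setString(2, "1.0.0");         // assumed container version
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // One row per policy held by the container.
                        System.out.println(rs.getString("name") + " " + rs.getString("version"));
                    }
                }
            }
        }
    }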
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) grafana | logger=migrator t=2024-01-17T23:14:26.819773366Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=509.378µs kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) policy-pap | ssl.engine.factory.class = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.825222796Z level=info msg="Executing migration" id="drop table annotation_tag_v2" kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) policy-pap | ssl.key.password = null grafana | logger=migrator t=2024-01-17T23:14:26.826856371Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=1.633235ms policy-db-migrator | policy-pap | ssl.keymanager.algorithm = SunX509 kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.832946641Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" policy-pap | ssl.keystore.certificate.chain = null grafana | logger=migrator t=2024-01-17T23:14:26.833141514Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=195.173µs policy-db-migrator | kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) policy-pap | ssl.keystore.key = null policy-db-migrator | > upgrade 0770-toscarequirement.sql kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) kafka | [2024-01-17 23:15:06,547] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) policy-pap | ssl.keystore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.837162584Z level=info msg="Executing migration" id="Add created time to annotation table" kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) policy-pap | ssl.keystore.password = null policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) grafana | logger=migrator t=2024-01-17T23:14:26.843919233Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.754169ms kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) policy-pap | ssl.keystore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.848076085Z level=info msg="Executing migration" id="Add updated time to annotation table" kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:26.85248454Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.409005ms kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) policy-pap | 
ssl.provider = null policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:26.856159374Z level=info msg="Executing migration" id="Add index for created in annotation table" policy-pap | ssl.secure.random.implementation = null kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | > upgrade 0780-toscarequirements.sql grafana | logger=migrator t=2024-01-17T23:14:26.857108158Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=948.944µs policy-pap | ssl.trustmanager.algorithm = PKIX kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.86332579Z level=info msg="Executing migration" id="Add index for updated in annotation table" policy-pap | ssl.truststore.certificates = null kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) grafana | logger=migrator t=2024-01-17T23:14:26.864690871Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.373881ms policy-pap | ssl.truststore.location = null kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.901085479Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" policy-pap | ssl.truststore.password = null kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) policy-db-migrator | policy-pap | ssl.truststore.type = JKS kafka | [2024-01-17 23:15:06,548] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.901587416Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=508.577µs policy-db-migrator | kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.951004386Z level=info msg="Executing migration" id="Add epoch_end column" policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.954223343Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.221557ms policy-pap | policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:26.960289834Z level=info msg="Executing migration" id="Add index for epoch_end" policy-pap | [2024-01-17T23:15:05.014+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) policy-pap | [2024-01-17T23:15:05.014+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 
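The Grafana migration just logged ("Convert existing annotations from seconds to milliseconds") is, at bottom, a multiply-by-1000 over stored epoch values; its actual SQL does not appear in this log, but the arithmetic it performs is simply:

    // Illustration only; Grafana's migration SQL is not shown in this log.
    long epochSeconds = 1_705_533_305L;       // e.g. 2024-01-17T23:15:05Z, cf. startTimeMs above
    long epochMillis  = epochSeconds * 1000L; // -> 1_705_533_305_000L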
grafana | logger=migrator t=2024-01-17T23:14:26.960998654Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=708.98µs policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:05.014+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705533305014 kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.964459645Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" policy-db-migrator | kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.964666818Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=206.903µs policy-pap | [2024-01-17T23:15:05.014+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap policy-db-migrator | kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.968617317Z level=info msg="Executing migration" id="Move region to single row" policy-pap | [2024-01-17T23:15:05.015+00:00|INFO|ServiceManager|main] Policy PAP starting topics policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.968998632Z level=info msg="Migration successfully executed" id="Move region to single row" duration=381.245µs policy-pap | [2024-01-17T23:15:05.015+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=f3f25057-810f-40c1-bb05-67d949feb974, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, 
effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.972778948Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" policy-pap | [2024-01-17T23:15:05.015+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=093ff4e0-f365-4742-90a8-254a3129a143, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.973497948Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=720.67µs policy-pap | [2024-01-17T23:15:05.015+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a55ecd13-b845-41e0-8cdd-8988f3bf7e46, alive=false, publisher=null]]: starting policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,548] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for 
partition __consumer_offsets-2 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.97906152Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" policy-pap | [2024-01-17T23:15:05.029+00:00|INFO|ProducerConfig|main] ProducerConfig values: policy-db-migrator | kafka | [2024-01-17 23:15:06,549] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.980112847Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.051007ms policy-pap | acks = -1 policy-db-migrator | kafka | [2024-01-17 23:15:06,551] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.984640433Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" policy-pap | auto.include.jmx.reporter = true policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.985343224Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=702.831µs policy-pap | batch.size = 16384 policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.993541365Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" policy-pap | bootstrap.servers = [kafka:9092] policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:26.99456332Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.021555ms policy-pap | buffer.memory = 33554432 policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.04264212Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" policy-pap | client.dns.lookup = use_all_dns_ips policy-db-migrator | kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to 
OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.043796957Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.163637ms policy-pap | client.id = producer-1 policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:27.048028951Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) policy-pap | compression.type = none policy-db-migrator | > upgrade 0820-toscatrigger.sql grafana | logger=migrator t=2024-01-17T23:14:27.048674951Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=645.85µs kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) policy-pap | connections.max.idle.ms = 540000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:27.055406032Z level=info msg="Executing migration" id="Increase tags column to length 4096" kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) policy-pap | delivery.timeout.ms = 120000 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) policy-pap | enable.idempotence = true grafana | logger=migrator t=2024-01-17T23:14:27.055511614Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=105.662µs policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) policy-pap | interceptor.classes = [] grafana | logger=migrator t=2024-01-17T23:14:27.058188744Z level=info msg="Executing migration" id="create test_data table" policy-db-migrator | kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-01-17T23:14:27.058798683Z level=info msg="Migration successfully executed" id="create test_data table" duration=609.429µs policy-db-migrator | kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 
policy-pap | linger.ms = 0 grafana | logger=migrator t=2024-01-17T23:14:27.061710397Z level=info msg="Executing migration" id="create dashboard_version table v1" policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) policy-pap | max.block.ms = 60000 grafana | logger=migrator t=2024-01-17T23:14:27.062324446Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=613.939µs policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) policy-pap | max.in.flight.requests.per.connection = 5 grafana | logger=migrator t=2024-01-17T23:14:27.071372481Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) policy-pap | max.request.size = 1048576 grafana | logger=migrator t=2024-01-17T23:14:27.073044757Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.671876ms policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metadata.max.age.ms = 300000 grafana | logger=migrator t=2024-01-17T23:14:27.080901925Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" policy-db-migrator | kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metadata.max.idle.ms = 300000 grafana | logger=migrator t=2024-01-17T23:14:27.082977316Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=2.074501ms policy-db-migrator | kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metric.reporters = [] grafana | logger=migrator t=2024-01-17T23:14:27.086306006Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql kafka | [2024-01-17 23:15:06,553] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metrics.num.samples = 2 grafana | logger=migrator t=2024-01-17T23:14:27.087063868Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=758.021µs policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) policy-pap | 
metrics.recording.level = INFO grafana | logger=migrator t=2024-01-17T23:14:27.09056352Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) policy-pap | metrics.sample.window.ms = 30000 grafana | logger=migrator t=2024-01-17T23:14:27.091069847Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=506.077µs policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) policy-pap | partitioner.adaptive.partitioning.enable = true grafana | logger=migrator t=2024-01-17T23:14:27.097508025Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" policy-db-migrator | kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) policy-pap | partitioner.availability.timeout.ms = 0 grafana | logger=migrator t=2024-01-17T23:14:27.097634137Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=125.911µs policy-db-migrator | kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) policy-pap | partitioner.class = null grafana | logger=migrator t=2024-01-17T23:14:27.101115899Z level=info msg="Executing migration" id="create team table" policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) policy-pap | partitioner.ignore.keys = false policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.102337436Z level=info msg="Migration successfully executed" id="create team table" duration=1.221197ms policy-pap | receive.buffer.bytes = 32768 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.1078685Z level=info msg="Executing migration" id="add index team.org_id" policy-pap | reconnect.backoff.max.ms = 1000 policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.108978637Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.110307ms policy-pap | 
reconnect.backoff.ms = 50 policy-db-migrator | kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.117497765Z level=info msg="Executing migration" id="add unique index team_org_id_name" policy-pap | request.timeout.ms = 30000 policy-db-migrator | kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.119038307Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.540922ms policy-pap | retries = 2147483647 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.124081493Z level=info msg="Executing migration" id="Add column uid in team" policy-pap | retry.backoff.ms = 100 policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.127371593Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.286629ms policy-pap | sasl.client.callback.handler.class = null policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.130964047Z level=info msg="Executing migration" id="Update uid column values in team" policy-pap | sasl.jaas.config = null policy-db-migrator | -------------- policy-db-migrator | kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.131109169Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=145.432µs policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit policy-db-migrator | kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.136017082Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" policy-pap | sasl.kerberos.min.time.before.relogin = 60000 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.137933041Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.919099ms policy-pap | sasl.kerberos.service.name = null policy-db-migrator | -------------- kafka | 
[2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.141255001Z level=info msg="Executing migration" id="create team member table" policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.142680733Z level=info msg="Migration successfully executed" id="create team member table" duration=1.425812ms policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.146539001Z level=info msg="Executing migration" id="add index team_member.org_id" policy-pap | sasl.login.callback.handler.class = null policy-db-migrator | kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.147380703Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=848.163µs policy-pap | sasl.login.class = null policy-db-migrator | kafka | [2024-01-17 23:15:06,554] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.151949632Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" policy-pap | sasl.login.connect.timeout.ms = null policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql kafka | [2024-01-17 23:15:06,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.153194641Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.243329ms policy-pap | sasl.login.read.timeout.ms = null policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.158538371Z level=info msg="Executing migration" id="add index team_member.team_id" policy-pap | sasl.login.refresh.buffer.seconds = 300 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) kafka | [2024-01-17 23:15:06,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.159393194Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=854.944µs policy-pap | sasl.login.refresh.min.period.seconds = 60 policy-db-migrator | 
-------------- kafka | [2024-01-17 23:15:06,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.164930967Z level=info msg="Executing migration" id="Add column email to team table" policy-pap | sasl.login.refresh.window.factor = 0.8 policy-db-migrator | kafka | [2024-01-17 23:15:06,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.170808215Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.878848ms policy-pap | sasl.login.refresh.window.jitter = 0.05 policy-db-migrator | kafka | [2024-01-17 23:15:06,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.176519961Z level=info msg="Executing migration" id="Add column external to team_member table" policy-pap | sasl.login.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql kafka | [2024-01-17 23:15:06,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:27.184256188Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=7.731556ms policy-pap | sasl.login.retry.backoff.ms = 100 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:27.188239227Z level=info msg="Executing migration" id="Add column permission to team_member table" kafka | [2024-01-17 23:15:06,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.mechanism = GSSAPI grafana | logger=migrator t=2024-01-17T23:14:27.194380429Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=6.142182ms kafka | [2024-01-17 23:15:06,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) grafana | logger=migrator t=2024-01-17T23:14:27.202776025Z level=info msg="Executing migration" id="create dashboard acl table" kafka | [2024-01-17 23:15:06,555] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) policy-pap | sasl.oauthbearer.expected.audience = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:27.203638169Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=862.124µs kafka | [2024-01-17 23:15:06,555] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) policy-pap | sasl.oauthbearer.expected.issuer = null policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:27.210503221Z level=info msg="Executing migration" id="add index 
dashboard_acl_dashboard_id" kafka | [2024-01-17 23:15:06,557] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:27.2123568Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.853369ms kafka | [2024-01-17 23:15:06,558] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql grafana | logger=migrator t=2024-01-17T23:14:27.218205187Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" kafka | [2024-01-17 23:15:06,558] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:27.219240793Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.035606ms kafka | [2024-01-17 23:15:06,558] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.jwks.endpoint.url = null policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) grafana | logger=migrator t=2024-01-17T23:14:27.224656654Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" kafka | [2024-01-17 23:15:06,558] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.scope.claim.name = scope policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:27.226461341Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.807897ms kafka | [2024-01-17 23:15:06,558] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 
1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.sub.claim.name = sub policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:27.232653284Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | sasl.oauthbearer.token.endpoint.url = null policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:27.23374333Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.089686ms kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | security.protocol = PLAINTEXT policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql grafana | logger=migrator t=2024-01-17T23:14:27.239921433Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | security.providers = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:27.240877648Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=956.015µs kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | send.buffer.bytes = 131072 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) grafana | logger=migrator t=2024-01-17T23:14:27.246687785Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | socket.connection.setup.timeout.max.ms = 30000 policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:27.247894943Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.206768ms kafka | [2024-01-17 23:15:06,559] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | socket.connection.setup.timeout.ms = 10000 policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:27.251604049Z level=info msg="Executing migration" id="add index dashboard_permission" kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.cipher.suites = null policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:27.252762796Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.158287ms kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql grafana | logger=migrator t=2024-01-17T23:14:27.258401091Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.endpoint.identification.algorithm = https policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:27.25902896Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=633.649µs kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.engine.factory.class = null policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) grafana | logger=migrator t=2024-01-17T23:14:27.263929164Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 
epoch 1 (state.change.logger) policy-pap | ssl.key.password = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:27.264219198Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=290.594µs kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keymanager.algorithm = SunX509 policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:27.267423096Z level=info msg="Executing migration" id="create tag table" kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.certificate.chain = null policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:27.26828543Z level=info msg="Migration successfully executed" id="create tag table" duration=861.554µs kafka | [2024-01-17 23:15:06,559] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.key = null policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql grafana | logger=migrator t=2024-01-17T23:14:27.273101991Z level=info msg="Executing migration" id="add index tag.key_value" kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.location = null policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:27.274880258Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.777717ms kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.password = null policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) grafana | logger=migrator t=2024-01-17T23:14:27.283618409Z level=info msg="Executing migration" id="create login attempt table" kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.keystore.type = JKS policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:27.284432731Z level=info msg="Migration successfully executed" id="create login attempt table" duration=815.632µs kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.protocol = TLSv1.3 policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:27.29233124Z level=info msg="Executing migration" id="add index login_attempt.username" kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.provider = null policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:27.293214243Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=881.983µs kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.secure.random.implementation = null policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.trustmanager.algorithm = PKIX grafana | logger=migrator t=2024-01-17T23:14:27.298133037Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.truststore.certificates = null grafana | logger=migrator t=2024-01-17T23:14:27.299036512Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=903.665µs policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, 
topologyTemplateParentKeyName) kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.truststore.location = null grafana | logger=migrator t=2024-01-17T23:14:27.304665016Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.truststore.password = null grafana | logger=migrator t=2024-01-17T23:14:27.332611196Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=27.93406ms policy-db-migrator | kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | ssl.truststore.type = JKS grafana | logger=migrator t=2024-01-17T23:14:27.338416563Z level=info msg="Executing migration" id="create login_attempt v2" policy-db-migrator | kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | transaction.timeout.ms = 60000 grafana | logger=migrator t=2024-01-17T23:14:27.339131153Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=715.58µs policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | transactional.id = null grafana | logger=migrator t=2024-01-17T23:14:27.342866339Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | 
value.serializer = class org.apache.kafka.common.serialization.StringSerializer grafana | logger=migrator t=2024-01-17T23:14:27.344113369Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.246939ms policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT kafka | [2024-01-17 23:15:06,560] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | grafana | logger=migrator t=2024-01-17T23:14:27.351484769Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" policy-db-migrator | -------------- kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-17T23:15:05.039+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. grafana | logger=migrator t=2024-01-17T23:14:27.351881394Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=396.685µs policy-db-migrator | kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-17T23:15:05.054+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0 grafana | logger=migrator t=2024-01-17T23:14:27.359335157Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" policy-db-migrator | kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | [2024-01-17T23:15:05.054+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a grafana | logger=migrator t=2024-01-17T23:14:27.359895465Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=560.838µs policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) policy-pap | 
[2024-01-17T23:15:05.054+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705533305053
grafana | logger=migrator t=2024-01-17T23:14:27.363265076Z level=info msg="Executing migration" id="create user auth table"
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
policy-pap | [2024-01-17T23:15:05.054+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a55ecd13-b845-41e0-8cdd-8988f3bf7e46, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
grafana | logger=migrator t=2024-01-17T23:14:27.364016047Z level=info msg="Migration successfully executed" id="create user auth table" duration=750.341µs
policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | [2024-01-17T23:15:05.054+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=baac300c-bb13-4a62-87c2-50f6437f4257, alive=false, publisher=null]]: starting
kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.368733067Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
policy-db-migrator | --------------
policy-pap | [2024-01-17T23:15:05.054+00:00|INFO|ProducerConfig|main] ProducerConfig values:
kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.369635252Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=901.985µs
policy-db-migrator |
policy-pap | acks = -1
kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.37352442Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
policy-db-migrator |
policy-pap | auto.include.jmx.reporter = true
kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.373628011Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=104.981µs
policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
policy-pap | batch.size = 16384
kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.378294692Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
policy-db-migrator | --------------
policy-pap | bootstrap.servers = [kafka:9092]
kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.383232616Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.937464ms
policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | buffer.memory = 33554432
kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.389801534Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
policy-db-migrator | --------------
policy-pap | client.dns.lookup = use_all_dns_ips
kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.396728748Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=6.925834ms
policy-db-migrator |
policy-pap | client.id = producer-2
kafka | [2024-01-17 23:15:06,561] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.400787329Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
policy-db-migrator |
policy-pap | compression.type = none
kafka | [2024-01-17 23:15:06,562] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.404350713Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.562954ms
policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
policy-pap | connections.max.idle.ms = 540000
grafana | logger=migrator t=2024-01-17T23:14:27.408669208Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
policy-db-migrator | --------------
policy-pap | delivery.timeout.ms = 120000
kafka | [2024-01-17 23:15:06,562] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.412200621Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.531003ms
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | enable.idempotence = true
kafka | [2024-01-17 23:15:06,562] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.463313268Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
policy-db-migrator | --------------
policy-pap | interceptor.classes = []
kafka | [2024-01-17 23:15:06,597] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.465482481Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=2.169293ms
policy-db-migrator |
policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-01-17 23:15:06,597] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.470620488Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
policy-db-migrator |
policy-pap | linger.ms = 0
kafka | [2024-01-17 23:15:06,598] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.475843567Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.224509ms
policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
policy-pap | max.block.ms = 60000
kafka | [2024-01-17 23:15:06,598] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.479839867Z level=info msg="Executing migration" id="create server_lock table"
policy-db-migrator | --------------
policy-pap | max.in.flight.requests.per.connection = 5
kafka | [2024-01-17 23:15:06,601] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.48074502Z level=info msg="Migration successfully executed" id="create server_lock table" duration=904.353µs
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
policy-pap | max.request.size = 1048576
kafka | [2024-01-17 23:15:06,601] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.486459426Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
policy-db-migrator | --------------
policy-pap | metadata.max.age.ms = 300000
kafka | [2024-01-17 23:15:06,601] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.487960429Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.501233ms
policy-db-migrator |
policy-pap | metadata.max.idle.ms = 300000
kafka | [2024-01-17 23:15:06,601] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.49205169Z level=info msg="Executing migration" id="create user auth token table"
policy-db-migrator |
kafka | [2024-01-17 23:15:06,601] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.492945804Z level=info msg="Migration successfully executed" id="create user auth token table" duration=894.424µs
policy-pap | metric.reporters = []
policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql
kafka | [2024-01-17 23:15:06,601] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
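For readers following the policy-db-migrator stream interleaved above: each "> upgrade NNNN-*.sql" marker is one schema step, and the 0980/0990/1000 steps add the TOSCA foreign-key constraints shown. As a minimal, hypothetical sketch of what one such step amounts to when applied over JDBC (connection URL and credentials here are assumptions, not taken from this log; the actual migrator container drives the scripts itself):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FkUpgradeStep {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details for illustration; the CSIT stack wires the real
        // values into the policy-db-migrator container via its environment.
        String url = "jdbc:mariadb://mariadb:3306/policyadmin";
        // Same DDL as the 0980-FK_ToscaNodeType_requirementsName.sql step logged above.
        String ddl = "ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName "
                + "FOREIGN KEY (requirementsName, requirementsVersion) "
                + "REFERENCES toscarequirements (name, version) "
                + "ON UPDATE RESTRICT ON DELETE RESTRICT";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(ddl); // DDL returns 0 on success; a SQLException fails the step
        }
    }
}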
grafana | logger=migrator t=2024-01-17T23:14:27.497756106Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
policy-pap | metrics.num.samples = 2
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:06,601] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.49869905Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=942.734µs
policy-pap | metrics.recording.level = INFO
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-01-17 23:15:06,601] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.505992559Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
policy-pap | metrics.sample.window.ms = 30000
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:06,601] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.507247418Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.253379ms
policy-pap | partitioner.adaptive.partitioning.enable = true
policy-db-migrator |
kafka | [2024-01-17 23:15:06,601] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.510808282Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
policy-pap | partitioner.availability.timeout.ms = 0
policy-db-migrator |
kafka | [2024-01-17 23:15:06,601] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.512464407Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.655815ms
policy-pap | partitioner.class = null
policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql
kafka | [2024-01-17 23:15:06,602] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.516109121Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
policy-pap | partitioner.ignore.keys = false
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:27.52269695Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.586649ms
kafka | [2024-01-17 23:15:06,602] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-pap | receive.buffer.bytes = 32768
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-01-17T23:14:27.527194788Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
kafka | [2024-01-17 23:15:06,602] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
policy-pap | reconnect.backoff.max.ms = 1000
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:27.527968459Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=772.161µs
kafka | [2024-01-17 23:15:06,602] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
policy-pap | reconnect.backoff.ms = 50
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:27.53330269Z level=info msg="Executing migration" id="create cache_data table"
kafka | [2024-01-17 23:15:06,602] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
policy-pap | request.timeout.ms = 30000
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:27.534583929Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.28102ms
kafka | [2024-01-17 23:15:06,602] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
policy-pap | retries = 2147483647
policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
grafana | logger=migrator t=2024-01-17T23:14:27.540378246Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
kafka | [2024-01-17 23:15:06,602] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
policy-pap | retry.backoff.ms = 100
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:27.542219273Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.841027ms
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
policy-pap | sasl.client.callback.handler.class = null
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-01-17T23:14:27.547141987Z level=info msg="Executing migration" id="create short_url table v1"
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:27.548483738Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.341751ms
policy-pap | sasl.jaas.config = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:27.552333016Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:27.552989665Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=656.649µs
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
policy-pap | sasl.kerberos.min.time.before.relogin = 60000
policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
grafana | logger=migrator t=2024-01-17T23:14:27.557091197Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger)
policy-pap | sasl.kerberos.service.name = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:27.557136948Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=46.141µs
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-01-17T23:14:27.560400537Z level=info msg="Executing migration" id="delete alert_definition table"
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger)
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:27.560532299Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=132.422µs
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger)
policy-pap | sasl.login.callback.handler.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:27.564098802Z level=info msg="Executing migration" id="recreate alert_definition table"
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger)
policy-pap | sasl.login.class = null
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:27.565293681Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.194159ms
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger)
policy-pap | sasl.login.connect.timeout.ms = null
policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
grafana | logger=migrator t=2024-01-17T23:14:27.56860181Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger)
policy-pap | sasl.login.read.timeout.ms = null
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:27.569759927Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.157997ms
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger)
policy-pap | sasl.login.refresh.buffer.seconds = 300
policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
grafana | logger=migrator t=2024-01-17T23:14:27.574964855Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
policy-pap | sasl.login.refresh.min.period.seconds = 60
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:27.57594434Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=988.315µs
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger)
policy-pap | sasl.login.refresh.window.factor = 0.8
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:27.581156788Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger)
policy-pap | sasl.login.refresh.window.jitter = 0.05
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:27.581220199Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=63.911µs
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger)
policy-pap | sasl.login.retry.backoff.max.ms = 10000
policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
grafana | logger=migrator t=2024-01-17T23:14:27.584477288Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger)
policy-pap | sasl.login.retry.backoff.ms = 100
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:27.58594754Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.467782ms
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger)
policy-pap | sasl.mechanism = GSSAPI
grafana | logger=migrator t=2024-01-17T23:14:27.590206875Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
grafana | logger=migrator t=2024-01-17T23:14:27.591676316Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.469241ms
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.audience = null
grafana | logger=migrator t=2024-01-17T23:14:27.595016966Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
policy-db-migrator |
kafka | [2024-01-17 23:15:06,603] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
policy-pap | sasl.oauthbearer.expected.issuer = null
grafana | logger=migrator t=2024-01-17T23:14:27.596193485Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.176159ms
policy-db-migrator |
kafka | [2024-01-17 23:15:06,604] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
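The 1060 step in the migrator output above is notable: it wires toscaservicetemplate to toscatopologytemplate through a four-column composite foreign key. Sketched as a JPA mapping (entity and field names here are simplified stand-ins, not the actual policy/models classes, and jakarta.persistence on the classpath is an assumption), such a composite reference takes one @JoinColumn per key column:

import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.JoinColumn;
import jakarta.persistence.JoinColumns;
import jakarta.persistence.ManyToOne;

// Hypothetical, simplified entities for illustration only; the real models use
// composite key classes rather than the surrogate ids shown here.
@Entity
class ToscaTopologyTemplateEntity {
    @Id private Long surrogateId; // stand-in; the real table is keyed by the four columns below
}

@Entity
public class ToscaServiceTemplateEntity {
    @Id private Long surrogateId;

    // Mirrors the TscaServiceTemplatetopologyTemplateParentLocalName constraint logged above:
    // four local columns referencing toscatopologytemplate(parentLocalName, localName,
    // parentKeyVersion, parentKeyName).
    @ManyToOne
    @JoinColumns({
        @JoinColumn(name = "topologyTemplateParentLocalName", referencedColumnName = "parentLocalName"),
        @JoinColumn(name = "topologyTemplateLocalName", referencedColumnName = "localName"),
        @JoinColumn(name = "topologyTemplateParentKeyVersion", referencedColumnName = "parentKeyVersion"),
        @JoinColumn(name = "topologyTemplateParentKeyName", referencedColumnName = "parentKeyName")
    })
    private ToscaTopologyTemplateEntity topologyTemplate;
}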
grafana | logger=migrator t=2024-01-17T23:14:27.59992511Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
policy-db-migrator | > upgrade 0100-pdp.sql
kafka | [2024-01-17 23:15:06,604] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
grafana | logger=migrator t=2024-01-17T23:14:27.600864144Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=938.734µs
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:06,604] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
grafana | logger=migrator t=2024-01-17T23:14:27.604945876Z level=info msg="Executing migration" id="Add column paused in alert_definition"
policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
kafka | [2024-01-17 23:15:06,604] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
grafana | logger=migrator t=2024-01-17T23:14:27.610576811Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.630134ms
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:06,604] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
policy-pap | sasl.oauthbearer.scope.claim.name = scope
grafana | logger=migrator t=2024-01-17T23:14:27.614180644Z level=info msg="Executing migration" id="drop alert_definition table"
policy-db-migrator |
kafka | [2024-01-17 23:15:06,604] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
policy-pap | sasl.oauthbearer.sub.claim.name = sub
grafana | logger=migrator t=2024-01-17T23:14:27.615048577Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=866.383µs
policy-db-migrator |
kafka | [2024-01-17 23:15:06,604] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
policy-pap | sasl.oauthbearer.token.endpoint.url = null
grafana | logger=migrator t=2024-01-17T23:14:27.622446589Z level=info msg="Executing migration" id="delete alert_definition_version table"
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
kafka | [2024-01-17 23:15:06,604] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
policy-pap | security.protocol = PLAINTEXT
grafana | logger=migrator t=2024-01-17T23:14:27.622629521Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=181.242µs
policy-db-migrator | --------------
policy-pap | security.providers = null
kafka | [2024-01-17 23:15:06,605] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
grafana | logger=migrator t=2024-01-17T23:14:27.626594481Z level=info msg="Executing migration" id="recreate alert_definition_version table"
policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
policy-pap | send.buffer.bytes = 131072
kafka | [2024-01-17 23:15:06,607] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.628067093Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.471452ms
policy-db-migrator | --------------
policy-pap | socket.connection.setup.timeout.max.ms = 30000
kafka | [2024-01-17 23:15:06,652] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-17T23:14:27.632672363Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
policy-db-migrator |
policy-pap | socket.connection.setup.timeout.ms = 10000
kafka | [2024-01-17 23:15:06,667] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-17T23:14:27.633682047Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.007085ms
policy-db-migrator |
policy-pap | ssl.cipher.suites = null
kafka | [2024-01-17 23:15:06,669] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:27.639388023Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
kafka | [2024-01-17 23:15:06,670] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:27.640944847Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.556064ms
policy-db-migrator | --------------
policy-pap | ssl.endpoint.identification.algorithm = https
kafka | [2024-01-17 23:15:06,672] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.644801314Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
policy-pap | ssl.engine.factory.class = null
kafka | [2024-01-17 23:15:06,783] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-17T23:14:27.644906126Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=106.912µs
policy-db-migrator | --------------
policy-pap | ssl.key.password = null
kafka | [2024-01-17 23:15:06,784] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-17T23:14:27.648847075Z level=info msg="Executing migration" id="drop alert_definition_version table"
policy-db-migrator |
policy-pap | ssl.keymanager.algorithm = SunX509
kafka | [2024-01-17 23:15:06,784] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:27.649958342Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.109017ms
policy-db-migrator |
policy-pap | ssl.keystore.certificate.chain = null
kafka | [2024-01-17 23:15:06,784] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:27.782966419Z level=info msg="Executing migration" id="create alert_instance table"
policy-db-migrator | > upgrade 0130-pdpstatistics.sql
policy-pap | ssl.keystore.key = null
kafka | [2024-01-17 23:15:06,784] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:27.785247234Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=2.284085ms
policy-db-migrator | --------------
policy-pap | ssl.keystore.location = null
kafka | [2024-01-17 23:15:06,830] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-17T23:14:28.623705417Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
policy-pap | ssl.keystore.password = null
kafka | [2024-01-17 23:15:06,831] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-17T23:14:28.625627536Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.92476ms
policy-db-migrator | --------------
policy-pap | ssl.keystore.type = JKS
kafka | [2024-01-17 23:15:06,831] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:28.801926134Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
policy-db-migrator |
policy-pap | ssl.protocol = TLSv1.3
kafka | [2024-01-17 23:15:06,831] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:28.80368238Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.756976ms
policy-db-migrator |
policy-pap | ssl.provider = null
kafka | [2024-01-17 23:15:06,831] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:28.919010212Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
policy-pap | ssl.secure.random.implementation = null
kafka | [2024-01-17 23:15:06,933] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-17T23:14:28.930110379Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=11.098367ms
policy-db-migrator | --------------
policy-pap | ssl.trustmanager.algorithm = PKIX
kafka | [2024-01-17 23:15:06,934] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-17T23:14:29.007179616Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
policy-pap | ssl.truststore.certificates = null
kafka | [2024-01-17 23:15:06,934] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:29.009341129Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=2.161213ms
policy-pap | ssl.truststore.location = null
kafka | [2024-01-17 23:15:06,934] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:29.061945669Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
policy-pap | ssl.truststore.password = null
kafka | [2024-01-17 23:15:06,935] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
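Taken together, the pdpstatistics steps logged above show the usual pattern for adding a synthetic column to an existing key: 0120 drops the old primary key, 0130 adds the ID column, and 0140 backfills it deterministically with ROW_NUMBER() before promoting (ID, name, version) to a composite primary key. A hypothetical JDBC driver for just the 0140 pair (URL and credentials are assumptions; note that in MariaDB the ALTER commits implicitly, so the backfill result should be verified first):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PdpStatisticsPkRework {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user");
             Statement stmt = conn.createStatement()) {
            // Number existing rows so the new ID column holds unique values (same SQL as the log).
            stmt.executeUpdate("UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, "
                    + "ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics "
                    + "GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND "
                    + "p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num");
            // Then promote (ID, name, version) to the composite primary key.
            stmt.executeUpdate("ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS "
                    + "PRIMARY KEY (ID, name, version)");
        }
    }
}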
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:29.064009521Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=2.059401ms
policy-pap | ssl.truststore.type = JKS
kafka | [2024-01-17 23:15:06,985] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
grafana | logger=migrator t=2024-01-17T23:14:29.177571376Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
policy-pap | transaction.timeout.ms = 60000
kafka | [2024-01-17 23:15:06,986] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:29.224316048Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=46.729792ms
policy-pap | transactional.id = null
kafka | [2024-01-17 23:15:06,987] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:29.263391645Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
kafka | [2024-01-17 23:15:06,987] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:29.301088061Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=37.693626ms
policy-pap |
kafka | [2024-01-17 23:15:06,987] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | > upgrade 0150-pdpstatistics.sql
grafana | logger=migrator t=2024-01-17T23:14:29.474538816Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
policy-pap | [2024-01-17T23:15:05.055+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer.
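The ProducerConfig dump threaded through the policy-pap lines above (acks = -1, enable.idempotence = true, retries = 2147483647, String serializers, bootstrap server kafka:9092) ends here with "Instantiated an idempotent producer." A minimal standalone equivalent, assuming kafka-clients 3.6.0 on the classpath and a reachable broker (the payload below is illustrative, not a real PAP message):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirror the ProducerConfig dump in the log above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                // logged as acks = -1 (equivalent)
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);   // yields the "idempotent producer" line
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); // logged as retries = 2147483647
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // policy-pdp-pap is the topic PAP and the PDPs share in this stack.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            producer.flush(); // block until the record is acknowledged by the broker
        }
    }
}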
kafka | [2024-01-17 23:15:07,026] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:29.476691568Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=2.150842ms
policy-pap | [2024-01-17T23:15:05.057+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.0
kafka | [2024-01-17 23:15:07,026] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
grafana | logger=migrator t=2024-01-17T23:14:29.574330835Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
policy-pap | [2024-01-17T23:15:05.057+00:00|INFO|AppInfoParser|main] Kafka commitId: 60e845626d8a465a
kafka | [2024-01-17 23:15:07,026] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:29.57603688Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.708525ms
policy-pap | [2024-01-17T23:15:05.057+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1705533305057
kafka | [2024-01-17 23:15:07,026] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:29.751529156Z level=info msg="Executing migration" id="add current_reason column related to current_state"
policy-pap | [2024-01-17T23:15:05.057+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=baac300c-bb13-4a62-87c2-50f6437f4257, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:29.761866642Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=10.341296ms
kafka | [2024-01-17 23:15:07,026] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:05.057+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator
policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-01-17T23:14:30.165777054Z level=info msg="Executing migration" id="create alert_rule table"
kafka | [2024-01-17 23:15:07,392] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:05.057+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:30.166929203Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.155949ms
kafka | [2024-01-17 23:15:07,393] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:05.059+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
grafana | logger=migrator t=2024-01-17T23:14:30.223254147Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
kafka | [2024-01-17 23:15:07,393] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:05.059+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:30.225466414Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=2.213448ms
kafka | [2024-01-17 23:15:07,393] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:05.060+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:30.646949039Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
kafka | [2024-01-17 23:15:07,393] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:05.060+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:30.649441061Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=2.497812ms
kafka | [2024-01-17 23:15:07,472] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:05.060+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests
policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-01-17T23:14:30.724022913Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
kafka | [2024-01-17 23:15:07,473] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:05.060+00:00|INFO|TimerManager|Thread-9] timer manager update started
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:30.72620404Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=2.183597ms
kafka | [2024-01-17 23:15:07,473] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:05.060+00:00|INFO|TimerManager|Thread-10] timer manager state-change started
policy-db-migrator | UPDATE jpapdpstatistics_enginestats a
grafana | logger=migrator t=2024-01-17T23:14:30.784482507Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
kafka | [2024-01-17 23:15:07,473] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:05.061+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer
policy-db-migrator | JOIN pdpstatistics b
grafana | logger=migrator t=2024-01-17T23:14:30.78464231Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=164.823µs
kafka | [2024-01-17 23:15:07,473] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:05.065+00:00|INFO|ServiceManager|main] Policy PAP started
policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
grafana | logger=migrator t=2024-01-17T23:14:30.927579759Z level=info msg="Executing migration" id="add column for to alert_rule"
kafka | [2024-01-17 23:15:07,522] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:05.066+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.387 seconds (process running for 11.986)
policy-db-migrator | SET a.id = b.id
grafana | logger=migrator t=2024-01-17T23:14:30.937256932Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.681823ms
kafka | [2024-01-17 23:15:07,522] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:05.578+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: TCpMGCYeSECduTbHgcA3wg
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:31.005538609Z level=info msg="Executing migration" id="add column annotations to alert_rule"
kafka | [2024-01-17 23:15:07,522] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:05.581+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: TCpMGCYeSECduTbHgcA3wg
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:31.009728409Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.1882ms
kafka | [2024-01-17 23:15:07,522] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:05.588+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:31.097584866Z level=info msg="Executing migration" id="add column labels to alert_rule"
kafka | [2024-01-17 23:15:07,523] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:05.588+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: TCpMGCYeSECduTbHgcA3wg
policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql
grafana | logger=migrator t=2024-01-17T23:14:31.106262102Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=8.681866ms
kafka | [2024-01-17 23:15:07,556] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:05.702+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:31.172936052Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
kafka | [2024-01-17 23:15:07,557] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:05.744+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0
policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp
grafana | logger=migrator t=2024-01-17T23:14:31.175128069Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=2.194907ms
kafka | [2024-01-17 23:15:07,557] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:05.748+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:31.219735089Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
kafka | [2024-01-17 23:15:07,557] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:05.815+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 5 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:31.220647954Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=915.205µs
kafka | [2024-01-17 23:15:07,557] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:05.879+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:31.427859036Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
kafka | [2024-01-17 23:15:07,705] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:05.879+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Cluster ID: TCpMGCYeSECduTbHgcA3wg
policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql
grafana | logger=migrator t=2024-01-17T23:14:31.434073212Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.214575ms
kafka | [2024-01-17 23:15:07,706] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:05.951+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:31.710948634Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
kafka | [2024-01-17 23:15:07,706] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version))
policy-pap | [2024-01-17T23:15:06.062+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
grafana | logger=migrator t=2024-01-17T23:14:31.71721215Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.267516ms
kafka | [2024-01-17 23:15:07,707] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | --------------
policy-pap | [2024-01-17T23:15:06.082+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
grafana | logger=migrator t=2024-01-17T23:14:32.043898311Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
kafka | [2024-01-17 23:15:07,707] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | policy-pap | [2024-01-17T23:15:06.185+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-01-17T23:14:32.046169239Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.273099ms kafka | [2024-01-17 23:15:07,912] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-01-17T23:15:06.363+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:32.144667054Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" kafka | [2024-01-17 23:15:07,914] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql policy-pap | [2024-01-17T23:15:06.490+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:32.15450828Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.845286ms kafka | [2024-01-17 23:15:07,914] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:06.526+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:32.287619917Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" kafka | [2024-01-17 23:15:07,914] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) policy-pap | [2024-01-17T23:15:06.598+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:32.311267995Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=23.649948ms kafka | [2024-01-17 23:15:07,914] INFO [Broker 
id=1] Leader __consumer_offsets-49 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:06.653+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:32.577382778Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" kafka | [2024-01-17 23:15:08,757] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | policy-pap | [2024-01-17T23:15:06.709+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:32.577688653Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=312.575µs kafka | [2024-01-17 23:15:08,758] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | policy-pap | [2024-01-17T23:15:06.758+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:32.836880119Z level=info msg="Executing migration" id="create alert_rule_version table" kafka | [2024-01-17 23:15:08,758] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) policy-db-migrator | > upgrade 0210-sequence.sql policy-pap | [2024-01-17T23:15:06.822+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:32.838352284Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.473115ms kafka | [2024-01-17 23:15:08,759] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:06.863+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:32.846738175Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" kafka | [2024-01-17 23:15:08,759] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 
with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) policy-pap | [2024-01-17T23:15:06.928+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:32.848109667Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.373372ms policy-db-migrator | -------------- kafka | [2024-01-17 23:15:08,841] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-17T23:15:06.979+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-01-17T23:14:33.030228218Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" policy-db-migrator | kafka | [2024-01-17 23:15:08,842] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-17T23:15:07.040+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.031592112Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.366464ms policy-db-migrator | kafka | [2024-01-17 23:15:08,842] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) policy-pap | [2024-01-17T23:15:07.093+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.075067642Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" policy-db-migrator | > upgrade 0220-sequence.sql kafka | [2024-01-17 23:15:08,842] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-17T23:15:07.147+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.075200584Z level=info msg="Migration successfully 
executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=135.362µs policy-db-migrator | -------------- kafka | [2024-01-17 23:15:08,842] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-pap | [2024-01-17T23:15:07.202+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.089330943Z level=info msg="Executing migration" id="add column for to alert_rule_version" policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) kafka | [2024-01-17 23:15:09,265] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-17T23:15:07.252+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 22 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.100075282Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=10.74397ms policy-db-migrator | -------------- kafka | [2024-01-17 23:15:09,265] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-17T23:15:07.304+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-01-17T23:14:33.15103154Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" policy-db-migrator | kafka | [2024-01-17 23:15:09,266] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) policy-pap | [2024-01-17T23:15:07.358+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 24 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.157439447Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.409617ms policy-db-migrator | kafka | [2024-01-17 23:15:09,266] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-17T23:15:07.415+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-01-17T23:14:33.299554956Z level=info msg="Executing migration" 
id="add column labels to alert_rule_version" policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql kafka | [2024-01-17 23:15:09,266] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-pap | [2024-01-17T23:15:07.463+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 26 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.311629029Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=12.079312ms policy-db-migrator | -------------- kafka | [2024-01-17 23:15:09,277] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-17T23:15:07.542+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 30 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.343597605Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) kafka | [2024-01-17 23:15:09,278] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-17T23:15:07.567+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 28 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.349924993Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.327167ms policy-db-migrator | -------------- kafka | [2024-01-17 23:15:09,278] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) policy-pap | [2024-01-17T23:15:07.649+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 32 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.484040146Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" policy-db-migrator | kafka | [2024-01-17 23:15:09,278] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-17T23:15:07.682+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while 
fetching metadata with correlation id 30 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.494451471Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=10.419605ms policy-db-migrator | kafka | [2024-01-17 23:15:09,279] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-pap | [2024-01-17T23:15:07.754+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 34 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.616058706Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql kafka | [2024-01-17 23:15:09,356] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-17T23:15:07.787+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 32 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:33.616225738Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=178.433µs policy-db-migrator | -------------- kafka | [2024-01-17 23:15:09,357] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-17T23:15:07.890+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 34 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} grafana | logger=migrator t=2024-01-17T23:14:33.896973477Z level=info msg="Executing migration" id=create_alert_configuration_table policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) kafka | [2024-01-17 23:15:09,357] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:33.898892269Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.922002ms policy-pap | [2024-01-17T23:15:07.892+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 36 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-01-17 23:15:09,358] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 
(kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:33.948667995Z level=info msg="Executing migration" id="Add column default in alert_configuration" policy-pap | [2024-01-17T23:15:07.994+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 38 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} policy-db-migrator | kafka | [2024-01-17 23:15:09,358] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:33.957929412Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.270877ms policy-pap | [2024-01-17T23:15:07.994+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 36 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,396] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-17T23:14:34.242336391Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" policy-db-migrator | policy-pap | [2024-01-17T23:15:08.097+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 38 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-17 23:15:09,397] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-17T23:14:34.242529014Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=197.923µs policy-db-migrator | > upgrade 0120-toscatrigger.sql policy-pap | [2024-01-17T23:15:08.101+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 40 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,397] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.319651671Z level=info msg="Executing migration" id="add column org_id in alert_configuration" policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:08.202+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 40 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,397] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 
(kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.327329519Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.675978ms policy-db-migrator | DROP TABLE IF EXISTS toscatrigger policy-pap | [2024-01-17T23:15:08.202+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 42 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-17 23:15:09,398] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:34.375382738Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:08.307+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 44 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} kafka | [2024-01-17 23:15:09,426] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-17T23:14:34.376276883Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=896.065µs policy-db-migrator | policy-pap | [2024-01-17T23:15:08.313+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 42 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,427] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-17T23:14:34.381850016Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" policy-db-migrator | policy-pap | [2024-01-17T23:15:08.411+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 46 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,427] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.386381203Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.530877ms policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql policy-pap | [2024-01-17T23:15:08.418+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 44 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,427] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 
(kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.492683939Z level=info msg="Executing migration" id=create_ngalert_configuration_table policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:08.516+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 48 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,428] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:34.494056502Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.376444ms policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB policy-pap | [2024-01-17T23:15:08.522+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 46 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,494] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-17T23:14:34.545141861Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:08.620+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 50 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,495] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-17T23:14:34.546914371Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.77241ms policy-db-migrator | policy-pap | [2024-01-17T23:15:08.627+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 48 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,495] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.586932993Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" policy-db-migrator | policy-pap | [2024-01-17T23:15:08.724+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 52 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,495] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.596608916Z 
level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=9.678023ms policy-db-migrator | > upgrade 0140-toscaparameter.sql policy-pap | [2024-01-17T23:15:08.731+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 50 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,496] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:34.672199547Z level=info msg="Executing migration" id="create provenance_type table" policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:08.828+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 54 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.673453757Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.25407ms kafka | [2024-01-17 23:15:09,575] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | DROP TABLE IF EXISTS toscaparameter policy-pap | [2024-01-17T23:15:08.835+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 52 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.70629074Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" kafka | [2024-01-17 23:15:09,576] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:08.932+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 56 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.707881976Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.590906ms kafka | [2024-01-17 23:15:09,576] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-17T23:15:08.937+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 54 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.736585769Z level=info msg="Executing migration" id="create alert_image table" kafka | [2024-01-17 23:15:09,576] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition 
__consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-17T23:15:09.040+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 58 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.738172935Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.591426ms kafka | [2024-01-17 23:15:09,577] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | > upgrade 0150-toscaproperty.sql policy-pap | [2024-01-17T23:15:09.042+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 56 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.741602392Z level=info msg="Executing migration" id="add unique index on token to alert_image table" kafka | [2024-01-17 23:15:09,726] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:09.143+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 60 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.743428474Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.826261ms kafka | [2024-01-17 23:15:09,727] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints policy-pap | [2024-01-17T23:15:09.147+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 58 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.747169686Z level=info msg="Executing migration" id="support longer URLs in alert_image table" kafka | [2024-01-17 23:15:09,727] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:09.245+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 62 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.747240857Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=74.221µs kafka | [2024-01-17 23:15:09,728] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 
(kafka.cluster.Partition) policy-db-migrator | policy-pap | [2024-01-17T23:15:09.256+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 60 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.751858335Z level=info msg="Executing migration" id=create_alert_configuration_history_table kafka | [2024-01-17 23:15:09,728] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:09.349+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 64 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.75273525Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=876.005µs kafka | [2024-01-17 23:15:09,815] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata policy-pap | [2024-01-17T23:15:09.361+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 62 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} grafana | logger=migrator t=2024-01-17T23:14:34.755845532Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" kafka | [2024-01-17 23:15:09,817] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:09.455+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 66 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,817] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.75747963Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.633318ms policy-db-migrator | policy-pap | [2024-01-17T23:15:09.464+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 64 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} kafka | [2024-01-17 23:15:09,817] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.761897884Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" policy-pap | 
[2024-01-17T23:15:09.561+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 68 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-01-17 23:15:09,818] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:34.762570725Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" policy-pap | [2024-01-17T23:15:09.567+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 66 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | DROP TABLE IF EXISTS toscaproperty kafka | [2024-01-17 23:15:09,851] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-17T23:14:34.771044988Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" policy-pap | [2024-01-17T23:15:09.665+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 70 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-01-17 23:15:09,852] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-17T23:14:34.771443404Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=398.256µs policy-pap | [2024-01-17T23:15:09.670+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 68 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-01-17 23:15:09,852] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.77475576Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" policy-pap | [2024-01-17T23:15:09.767+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 72 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-01-17 23:15:09,852] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.776332887Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.576537ms policy-pap | 
[2024-01-17T23:15:09.776+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 70 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql kafka | [2024-01-17 23:15:09,852] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:34.780432715Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" policy-pap | [2024-01-17T23:15:09.873+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 74 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-01-17 23:15:09,873] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-17T23:14:34.788133925Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.70016ms policy-pap | [2024-01-17T23:15:09.893+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 72 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY kafka | [2024-01-17 23:15:09,875] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-17T23:14:34.79255385Z level=info msg="Executing migration" id="create library_element table v1" policy-pap | [2024-01-17T23:15:09.977+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 76 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-01-17 23:15:09,875] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.793144679Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=590.659µs policy-pap | [2024-01-17T23:15:09.997+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 74 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-01-17 23:15:09,875] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.799471355Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" policy-pap | 
[2024-01-17T23:15:10.082+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 78 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-01-17 23:15:09,876] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:34.801225275Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.75344ms policy-pap | [2024-01-17T23:15:10.102+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 76 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) kafka | [2024-01-17 23:15:09,908] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) grafana | logger=migrator t=2024-01-17T23:14:34.807740584Z level=info msg="Executing migration" id="create library_element_connection table v1" policy-pap | [2024-01-17T23:15:10.186+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 80 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | -------------- kafka | [2024-01-17 23:15:09,909] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-17T23:14:34.808540058Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=799.274µs policy-pap | [2024-01-17T23:15:10.204+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 78 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-01-17 23:15:09,910] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.812931512Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" policy-pap | [2024-01-17T23:15:10.304+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 82 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} policy-db-migrator | kafka | [2024-01-17 23:15:09,910] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:34.814970836Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=2.038594ms 
policy-pap | [2024-01-17T23:15:10.307+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 80 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
kafka | [2024-01-17 23:15:09,910] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:34.818949353Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
policy-pap | [2024-01-17T23:15:10.408+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 84 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:10,017] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-17T23:14:34.820298855Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.350842ms
policy-pap | [2024-01-17T23:15:10.410+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 82 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
kafka | [2024-01-17 23:15:10,018] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-17T23:14:34.82472513Z level=info msg="Executing migration" id="increase max description length to 2048"
policy-pap | [2024-01-17T23:15:10.512+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 84 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:10,018] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:34.824847602Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=122.242µs
policy-pap | [2024-01-17T23:15:10.513+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 86 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-01-17 23:15:10,018] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:34.829359708Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
policy-pap | [2024-01-17T23:15:10.615+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 86 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:10,018] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:34.829521181Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=161.533µs
policy-pap | [2024-01-17T23:15:10.616+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 88 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
grafana | logger=migrator t=2024-01-17T23:14:34.832770845Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
kafka | [2024-01-17 23:15:10,043] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:10.717+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 88 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:34.833268623Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=497.878µs
kafka | [2024-01-17 23:15:10,044] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:10.719+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 90 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:34.836471147Z level=info msg="Executing migration" id="create data_keys table"
kafka | [2024-01-17 23:15:10,044] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:10.821+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 90 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:34.838199647Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.726659ms
kafka | [2024-01-17 23:15:10,045] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:10.823+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 92 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
grafana | logger=migrator t=2024-01-17T23:14:34.845417797Z level=info msg="Executing migration" id="create secrets table"
kafka | [2024-01-17 23:15:10,045] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:10.928+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 94 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:34.846245182Z level=info msg="Migration successfully executed" id="create secrets table" duration=827.715µs
kafka | [2024-01-17 23:15:10,271] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:10.929+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 92 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
grafana | logger=migrator t=2024-01-17T23:14:34.850863839Z level=info msg="Executing migration" id="rename data_keys name column to id"
kafka | [2024-01-17 23:15:10,272] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:11.031+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 96 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:34.898101583Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=47.223464ms
kafka | [2024-01-17 23:15:10,272] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:11.034+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 94 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:34.901667563Z level=info msg="Executing migration" id="add name column into data_keys"
kafka | [2024-01-17 23:15:10,273] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:11.134+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 98 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:34.911682521Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=10.012308ms
kafka | [2024-01-17 23:15:10,273] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:11.143+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 96 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0100-upgrade.sql
grafana | logger=migrator t=2024-01-17T23:14:34.915639977Z level=info msg="Executing migration" id="copy data_keys id column values into name"
kafka | [2024-01-17 23:15:10,377] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:11.235+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 100 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:34.915962474Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=323.747µs
kafka | [2024-01-17 23:15:10,378] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:11.245+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 98 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | select 'upgrade to 1100 completed' as msg
grafana | logger=migrator t=2024-01-17T23:14:34.920712393Z level=info msg="Executing migration" id="rename data_keys name column to label"
kafka | [2024-01-17 23:15:10,379] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:11.338+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 102 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:34.967146564Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=46.432101ms
kafka | [2024-01-17 23:15:10,379] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:11.348+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 100 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:35.037764871Z level=info msg="Executing migration" id="rename data_keys id column back to name"
kafka | [2024-01-17 23:15:10,379] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:11.439+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 104 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | msg
grafana | logger=migrator t=2024-01-17T23:14:35.089013602Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=51.249112ms
kafka | [2024-01-17 23:15:10,429] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:11.452+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 102 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | upgrade to 1100 completed
grafana | logger=migrator t=2024-01-17T23:14:35.094063277Z level=info msg="Executing migration" id="create kv_store table v1"
kafka | [2024-01-17 23:15:10,430] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:11.542+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 106 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:35.094825979Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=762.822µs
kafka | [2024-01-17 23:15:10,430] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:11.555+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 104 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
grafana | logger=migrator t=2024-01-17T23:14:35.099457707Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
kafka | [2024-01-17 23:15:10,430] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:11.646+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 108 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:35.101285128Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.830961ms
kafka | [2024-01-17 23:15:10,431] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:11.657+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 106 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
grafana | logger=migrator t=2024-01-17T23:14:35.105321416Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
kafka | [2024-01-17 23:15:10,486] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:11.748+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 110 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:35.105865505Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=544.089µs
kafka | [2024-01-17 23:15:10,487] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:11.760+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 108 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:35.113739917Z level=info msg="Executing migration" id="create permission table"
kafka | [2024-01-17 23:15:10,487] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:11.861+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 110 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:35.114915898Z level=info msg="Migration successfully executed" id="create permission table" duration=1.17601ms
kafka | [2024-01-17 23:15:10,487] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:11.881+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 112 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
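Note: each policy-db-migrator step in this log follows the same pattern: announce the script (e.g. "> upgrade 0100-jpapolicyaudit_renameuser.sql"), run its statements between "--------------" separators, and later record the outcome in the version table dumped at the end of the run. A rough JDBC sketch of that loop follows; the two ALTER statements and the script name are taken from this log, but the JDBC URL, credentials and the schema_versions bookkeeping table are hypothetical placeholders, not the actual migrator implementation.

    // Sketch only: run a migration script's statements, then record the step.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;
    import java.util.List;

    public class DbMigratorSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details, not the CSIT configuration.
            try (Connection db = DriverManager.getConnection(
                    "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_pass")) {
                List<String> script = List.of(
                        "ALTER TABLE pdpstatistics DROP PRIMARY KEY",
                        "ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)");
                try (Statement stmt = db.createStatement()) {
                    for (String sql : script) {
                        stmt.executeUpdate(sql); // one statement per separator block
                    }
                }
                // Mirrors the "ID script operation ... success atTime" summary rows
                // the migrator prints once every script has run.
                try (PreparedStatement rec = db.prepareStatement(
                        "INSERT INTO schema_versions(script, success, atTime) VALUES (?, 1, NOW())")) {
                    rec.setString(1, "0170-pdpstatistics_pk.sql");
                    rec.executeUpdate();
                }
            }
        }
    }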
policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
grafana | logger=migrator t=2024-01-17T23:14:35.122729958Z level=info msg="Executing migration" id="add unique index permission.role_id"
kafka | [2024-01-17 23:15:10,487] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:11.972+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 112 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:35.124824234Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=2.098136ms
kafka | [2024-01-17 23:15:10,501] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:11.984+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 114 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
grafana | logger=migrator t=2024-01-17T23:14:35.130157023Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
kafka | [2024-01-17 23:15:10,502] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:12.072+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 114 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION}
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:10,502] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.131349643Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.19219ms
policy-pap | [2024-01-17T23:15:12.087+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 116 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator |
kafka | [2024-01-17 23:15:10,502] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.134847023Z level=info msg="Executing migration" id="create role table"
policy-pap | [2024-01-17T23:15:12.174+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Error while fetching metadata with correlation id 116 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:10,503] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.135824509Z level=info msg="Migration successfully executed" id="create role table" duration=976.786µs
policy-pap | [2024-01-17T23:15:12.191+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 118 : {policy-pdp-pap=LEADER_NOT_AVAILABLE}
policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
kafka | [2024-01-17 23:15:10,518] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-17T23:14:35.140519188Z level=info msg="Executing migration" id="add column display_name"
policy-pap | [2024-01-17T23:15:12.280+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:10,518] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-17T23:14:35.148792717Z level=info msg="Migration successfully executed" id="add column display_name" duration=8.270089ms
policy-pap | [2024-01-17T23:15:12.292+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] (Re-)joining group
policy-db-migrator |
kafka | [2024-01-17 23:15:10,519] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.156422055Z level=info msg="Executing migration" id="add column group_name"
policy-pap | [2024-01-17T23:15:12.295+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
policy-db-migrator |
kafka | [2024-01-17 23:15:10,519] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.168312415Z level=info msg="Migration successfully executed" id="add column group_name" duration=11.890921ms
policy-pap | [2024-01-17T23:15:12.300+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-db-migrator | > upgrade 0120-audit_sequence.sql
kafka | [2024-01-17 23:15:10,519] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.171415427Z level=info msg="Executing migration" id="add index role.org_id"
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:10,684] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:12.326+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Request joining group due to: need to re-join with the given member-id: consumer-093ff4e0-f365-4742-90a8-254a3129a143-3-7e258dba-fe5e-4ce8-aa02-aab1610b4ec3
grafana | logger=migrator t=2024-01-17T23:14:35.172159799Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=744.372µs
policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
kafka | [2024-01-17 23:15:10,685] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-17T23:14:35.176685516Z level=info msg="Executing migration" id="add unique index role_org_id_name"
policy-pap | [2024-01-17T23:15:12.326+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator | --------------
kafka | [2024-01-17 23:15:10,685] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.177746513Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.062317ms
policy-pap | [2024-01-17T23:15:12.326+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] (Re-)joining group
policy-db-migrator |
kafka | [2024-01-17 23:15:10,686] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:12.327+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-752789c8-84e9-4da5-b6a8-9ea9c666d1f8
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:35.181012928Z level=info msg="Executing migration" id="add index role_org_id_uid"
kafka | [2024-01-17 23:15:10,686] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(TB_lmqBXRfuqVYs70rfOKA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:12.327+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
grafana | logger=migrator t=2024-01-17T23:14:35.182156098Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.14049ms
kafka | [2024-01-17 23:15:10,760] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:12.327+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:35.185594045Z level=info msg="Executing migration" id="create team role table"
kafka | [2024-01-17 23:15:10,761] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:14.497+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:35.186391929Z level=info msg="Migration successfully executed" id="create team role table" duration=797.313µs
kafka | [2024-01-17 23:15:10,761] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:14.497+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:35.192369799Z level=info msg="Executing migration" id="add index team_role.org_id"
kafka | [2024-01-17 23:15:10,762] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:14.499+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 2 ms
policy-db-migrator | > upgrade 0130-statistics_sequence.sql
grafana | logger=migrator t=2024-01-17T23:14:35.194040917Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.669818ms
kafka | [2024-01-17 23:15:10,762] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:15.352+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-752789c8-84e9-4da5-b6a8-9ea9c666d1f8', protocol='range'}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:35.202652082Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
kafka | [2024-01-17 23:15:10,792] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:15.355+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Successfully joined group with generation Generation{generationId=1, memberId='consumer-093ff4e0-f365-4742-90a8-254a3129a143-3-7e258dba-fe5e-4ce8-aa02-aab1610b4ec3', protocol='range'}
policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
grafana | logger=migrator t=2024-01-17T23:14:35.204615035Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.962513ms
kafka | [2024-01-17 23:15:10,793] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:15.360+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-752789c8-84e9-4da5-b6a8-9ea9c666d1f8=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:35.208100654Z level=info msg="Executing migration" id="add index team_role.team_id"
kafka | [2024-01-17 23:15:10,793] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:15.360+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Finished assignment for group at generation 1: {consumer-093ff4e0-f365-4742-90a8-254a3129a143-3-7e258dba-fe5e-4ce8-aa02-aab1610b4ec3=Assignment(partitions=[policy-pdp-pap-0])}
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:35.210207119Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.105205ms
kafka | [2024-01-17 23:15:10,793] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:15.391+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Successfully synced group in generation Generation{generationId=1, memberId='consumer-093ff4e0-f365-4742-90a8-254a3129a143-3-7e258dba-fe5e-4ce8-aa02-aab1610b4ec3', protocol='range'}
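Note: the join/sync sequence above — the expected MemberIdRequiredException on first contact, the re-join with an assigned member id, "Finished assignment" at generation 1, then "Successfully synced group" — is the standard Kafka consumer-group rebalance; application code only observes the final assignment step. A sketch of where a client hooks that point (hypothetical helper name, reusing a consumer built as in the earlier sketch):

    // Sketch only: observing the rebalance via a ConsumerRebalanceListener.
    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class RebalanceSketch {
        static void joinGroup(KafkaConsumer<String, String> consumer) {
            consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                    // Called before a later rebalance takes partitions away.
                }
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                    // Fires at the "Adding newly assigned partitions: policy-pdp-pap-0" step.
                    System.out.println("assigned: " + parts);
                }
            });
            // The join/sync round-trips, including the MemberIdRequiredException retry
            // on first contact, all happen inside poll().
            consumer.poll(Duration.ofSeconds(5));
        }
    }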
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:35.215005179Z level=info msg="Executing migration" id="create user role table"
kafka | [2024-01-17 23:15:10,793] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:15.392+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
grafana | logger=migrator t=2024-01-17T23:14:35.215735612Z level=info msg="Migration successfully executed" id="create user role table" duration=730.513µs
kafka | [2024-01-17 23:15:10,826] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:15.392+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-752789c8-84e9-4da5-b6a8-9ea9c666d1f8', protocol='range'}
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:35.220292338Z level=info msg="Executing migration" id="add index user_role.org_id"
kafka | [2024-01-17 23:15:10,827] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:15.393+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:35.221917656Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.624988ms
kafka | [2024-01-17 23:15:10,827] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:15.398+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:35.225480596Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
kafka | [2024-01-17 23:15:10,827] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:15.398+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0
policy-db-migrator | TRUNCATE TABLE sequence
grafana | logger=migrator t=2024-01-17T23:14:35.227429399Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.949084ms
kafka | [2024-01-17 23:15:10,827] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-pap | [2024-01-17T23:15:15.413+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator | --------------
grafana | logger=migrator t=2024-01-17T23:14:35.233837136Z level=info msg="Executing migration" id="add index user_role.user_id"
kafka | [2024-01-17 23:15:10,866] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-pap | [2024-01-17T23:15:15.414+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Found no committed offset for partition policy-pdp-pap-0
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:35.236931888Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=3.098752ms
kafka | [2024-01-17 23:15:10,866] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-pap | [2024-01-17T23:15:15.432+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-093ff4e0-f365-4742-90a8-254a3129a143-3, groupId=093ff4e0-f365-4742-90a8-254a3129a143] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator |
grafana | logger=migrator t=2024-01-17T23:14:35.24775129Z level=info msg="Executing migration" id="create builtin role table"
kafka | [2024-01-17 23:15:10,866] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
policy-pap | [2024-01-17T23:15:15.432+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
policy-db-migrator | > upgrade 0100-pdpstatistics.sql grafana | logger=migrator t=2024-01-17T23:14:35.24836721Z level=info msg="Migration successfully executed" id="create builtin role table" duration=615.69µs kafka | [2024-01-17 23:15:10,867] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-17T23:15:26.418+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:35.251101146Z level=info msg="Executing migration" id="add index builtin_role.role_id" kafka | [2024-01-17 23:15:10,867] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-pap | [] policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics grafana | logger=migrator t=2024-01-17T23:14:35.252279136Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.17804ms kafka | [2024-01-17 23:15:10,971] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-17T23:15:26.419+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:35.256942194Z level=info msg="Executing migration" id="add index builtin_role.name" kafka | [2024-01-17 23:15:10,972] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"354ee0e2-4bc6-4c3c-be39-81f934f0f052","timestampMs":1705533326383,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup"} policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:35.258223236Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.280952ms kafka | [2024-01-17 23:15:10,972] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) policy-pap | [2024-01-17T23:15:26.424+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:35.2608464Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" kafka | [2024-01-17 23:15:10,972] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"354ee0e2-4bc6-4c3c-be39-81f934f0f052","timestampMs":1705533326383,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup"} policy-db-migrator | DROP TABLE pdpstatistics grafana | logger=migrator t=2024-01-17T23:14:35.26858352Z level=info msg="Migration successfully executed" id="Add column 
org_id to builtin_role table" duration=7.73883ms kafka | [2024-01-17 23:15:10,973] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) policy-pap | [2024-01-17T23:15:26.429+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:35.273066315Z level=info msg="Executing migration" id="add index builtin_role.org_id" kafka | [2024-01-17 23:15:11,037] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-17T23:15:26.517+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate starting policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:35.274163853Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.100438ms kafka | [2024-01-17 23:15:11,038] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) policy-pap | [2024-01-17T23:15:26.517+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate starting listener policy-db-migrator | grafana | logger=migrator t=2024-01-17T23:14:35.277177505Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" kafka | [2024-01-17 23:15:11,038] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) policy-pap | [2024-01-17T23:15:26.517+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate starting timer policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql grafana | logger=migrator t=2024-01-17T23:14:35.278286753Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.109288ms kafka | [2024-01-17 23:15:11,038] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) policy-pap | [2024-01-17T23:15:26.518+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=fbd43c14-a4e8-4077-b4cc-19a57a79f4ce, expireMs=1705533356518] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:35.282166388Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" kafka | [2024-01-17 23:15:11,038] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) policy-pap | [2024-01-17T23:15:26.520+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate starting enqueue policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats grafana | logger=migrator t=2024-01-17T23:14:35.283243777Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.077099ms kafka | [2024-01-17 23:15:11,121] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) policy-pap | [2024-01-17T23:15:26.520+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=fbd43c14-a4e8-4077-b4cc-19a57a79f4ce, expireMs=1705533356518] policy-db-migrator | -------------- grafana | logger=migrator t=2024-01-17T23:14:35.286570263Z level=info msg="Executing migration" id="add unique index role.uid" policy-pap | [2024-01-17T23:15:26.520+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate started policy-db-migrator | kafka | [2024-01-17 23:15:11,122] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) grafana | logger=migrator t=2024-01-17T23:14:35.287712941Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.142409ms policy-db-migrator | policy-pap | [2024-01-17T23:15:26.522+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] kafka | [2024-01-17 23:15:11,122] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:35.293698982Z level=info msg="Executing migration" id="create seed assignment table" policy-db-migrator | > upgrade 0120-statistics_sequence.sql policy-pap | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"fbd43c14-a4e8-4077-b4cc-19a57a79f4ce","timestampMs":1705533326500,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} kafka | [2024-01-17 23:15:11,122] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) grafana | logger=migrator t=2024-01-17T23:14:35.294617018Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=915.626µs policy-db-migrator | -------------- policy-pap | [2024-01-17T23:15:26.552+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] kafka | [2024-01-17 23:15:11,122] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.304133247Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
policy-db-migrator | DROP TABLE statistics_sequence
policy-pap | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"fbd43c14-a4e8-4077-b4cc-19a57a79f4ce","timestampMs":1705533326500,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-17 23:15:11,169] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-17T23:14:35.306086561Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.953124ms
policy-db-migrator | --------------
policy-pap | [2024-01-17T23:15:26.553+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
kafka | [2024-01-17 23:15:11,169] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-17T23:14:35.309177832Z level=info msg="Executing migration" id="add column hidden to role table"
policy-db-migrator |
policy-pap | [2024-01-17T23:15:26.554+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-01-17 23:15:11,169] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.31795222Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.775238ms
policy-db-migrator | policyadmin: OK: upgrade (1300)
policy-pap | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"fbd43c14-a4e8-4077-b4cc-19a57a79f4ce","timestampMs":1705533326500,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-17 23:15:11,169] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.320769277Z level=info msg="Executing migration" id="permission kind migration"
policy-db-migrator | name version
policy-pap | [2024-01-17T23:15:26.558+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
kafka | [2024-01-17 23:15:11,170] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.326435202Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.665655ms
policy-db-migrator | policyadmin 1300
policy-pap | [2024-01-17T23:15:26.573+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-01-17 23:15:11,204] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-17T23:14:35.33040291Z level=info msg="Executing migration" id="permission attribute migration"
policy-db-migrator | ID script operation from_version to_version tag success atTime
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"86e417ee-ba90-4082-b783-a8cb76967993","timestampMs":1705533326562,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup"}
kafka | [2024-01-17 23:15:11,204] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.33822161Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.81948ms
policy-pap | [2024-01-17T23:15:26.573+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-01-17 23:15:11,204] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.344054388Z level=info msg="Executing migration" id="permission identifier migration"
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"86e417ee-ba90-4082-b783-a8cb76967993","timestampMs":1705533326562,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup"}
kafka | [2024-01-17 23:15:11,204] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.351787318Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.73216ms
policy-pap | [2024-01-17T23:15:26.574+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
kafka | [2024-01-17 23:15:11,205] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.355640184Z level=info msg="Executing migration" id="add permission identifier index"
policy-pap | [2024-01-17T23:15:26.584+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-01-17 23:15:11,319] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.356449067Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=805.943µs
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"fbd43c14-a4e8-4077-b4cc-19a57a79f4ce","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"d6679e7c-c185-41fd-b2b2-bbf8515deade","timestampMs":1705533326564,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-17 23:15:11,320] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.36138268Z level=info msg="Executing migration" id="create query_history table v1"
policy-pap | [2024-01-17T23:15:26.608+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate stopping
kafka | [2024-01-17 23:15:11,320] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.362801444Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.417233ms
policy-pap | [2024-01-17T23:15:26.609+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate stopping enqueue
kafka | [2024-01-17 23:15:11,320] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.366243661Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
policy-pap | [2024-01-17T23:15:26.609+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate stopping timer
kafka | [2024-01-17 23:15:11,320] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
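The policy-pap and kafka traffic above is the PAP-to-PDP handshake on the policy-pdp-pap topic: PAP publishes a PDP_UPDATE, each source listener discards message types it does not own, and the PDP replies with a PDP_STATUS whose response.responseTo echoes the original requestId. A minimal sketch of that dispatch-by-messageName pattern (illustrative only, assuming the kafka-python client and a local broker; this is not the ONAP code itself):

```python
import json

from kafka import KafkaConsumer  # assumption: kafka-python is installed

# Only PDP_STATUS is handled here; everything else is discarded, mirroring
# the "discarding event of type ..." MessageTypeDispatcher lines in the log.
HANDLED_TYPES = {"PDP_STATUS"}

consumer = KafkaConsumer(
    "policy-pdp-pap",                      # topic name taken from the log
    bootstrap_servers=["localhost:9092"],  # assumption: local test broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for record in consumer:
    event = record.value
    msg_type = event.get("messageName")
    if msg_type not in HANDLED_TYPES:
        print(f"discarding event of type {msg_type}")
        continue
    response = event.get("response") or {}
    print(f"PDP_STATUS from {event.get('name')} "
          f"responding to {response.get('responseTo')}")
```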
policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.368030882Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.786521ms
policy-pap | [2024-01-17T23:15:26.609+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=fbd43c14-a4e8-4077-b4cc-19a57a79f4ce, expireMs=1705533356518]
kafka | [2024-01-17 23:15:11,352] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.373514354Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
policy-pap | [2024-01-17T23:15:26.609+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate stopping listener
kafka | [2024-01-17 23:15:11,353] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.373603385Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=89.431µs
policy-pap | [2024-01-17T23:15:26.609+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate stopped
kafka | [2024-01-17 23:15:11,353] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.379956062Z level=info msg="Executing migration" id="rbac disabled migrator"
policy-pap | [2024-01-17T23:15:26.612+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
kafka | [2024-01-17 23:15:11,353] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.380033253Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=78.231µs
policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"fbd43c14-a4e8-4077-b4cc-19a57a79f4ce","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"d6679e7c-c185-41fd-b2b2-bbf8515deade","timestampMs":1705533326564,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-17 23:15:11,353] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.382648197Z level=info msg="Executing migration" id="teams permissions migration"
policy-pap | [2024-01-17T23:15:26.613+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id fbd43c14-a4e8-4077-b4cc-19a57a79f4ce
kafka | [2024-01-17 23:15:11,412] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.38344349Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=795.963µs
policy-pap | [2024-01-17T23:15:26.616+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate successful
kafka | [2024-01-17 23:15:11,413] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.387073012Z level=info msg="Executing migration" id="dashboard permissions"
policy-pap | [2024-01-17T23:15:26.616+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b start publishing next request
kafka | [2024-01-17 23:15:11,413] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.387910436Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=839.094µs
policy-pap | [2024-01-17T23:15:26.616+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpStateChange starting
kafka | [2024-01-17 23:15:11,413] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.392375121Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
policy-pap | [2024-01-17T23:15:26.616+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpStateChange starting listener
kafka | [2024-01-17 23:15:11,413] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:26
grafana | logger=migrator t=2024-01-17T23:14:35.393057412Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=682.311µs
policy-pap | [2024-01-17T23:15:26.616+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpStateChange starting timer
kafka | [2024-01-17 23:15:11,489] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
grafana | logger=migrator t=2024-01-17T23:14:35.398493854Z level=info msg="Executing migration" id="drop managed folder create actions"
policy-pap | [2024-01-17T23:15:26.616+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=c14afc04-dda2-444d-acd0-3073b0ca56f2, expireMs=1705533356616]
kafka | [2024-01-17 23:15:11,490] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
grafana | logger=migrator t=2024-01-17T23:14:35.398701757Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=208.423µs
policy-pap | [2024-01-17T23:15:26.616+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpStateChange starting enqueue
kafka | [2024-01-17 23:15:11,490] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
grafana | logger=migrator t=2024-01-17T23:14:35.402072524Z level=info msg="Executing migration" id="alerting notification permissions"
policy-pap | [2024-01-17T23:15:26.616+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpStateChange started
kafka | [2024-01-17 23:15:11,490] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
grafana | logger=migrator t=2024-01-17T23:14:35.402414319Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=343.155µs
policy-pap | [2024-01-17T23:15:26.616+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=c14afc04-dda2-444d-acd0-3073b0ca56f2, expireMs=1705533356616]
kafka | [2024-01-17 23:15:11,490] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
grafana | logger=migrator t=2024-01-17T23:14:35.405618383Z level=info msg="Executing migration" id="create query_history_star table v1"
policy-pap | [2024-01-17T23:15:26.617+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
kafka | [2024-01-17 23:15:11,599] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
grafana | logger=migrator t=2024-01-17T23:14:35.406855444Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.234731ms
policy-pap | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c14afc04-dda2-444d-acd0-3073b0ca56f2","timestampMs":1705533326501,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-17 23:15:11,599] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
grafana | logger=migrator t=2024-01-17T23:14:35.448911791Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
policy-pap | [2024-01-17T23:15:26.627+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
kafka | [2024-01-17 23:15:11,599] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.451039757Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=2.127706ms
policy-pap | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c14afc04-dda2-444d-acd0-3073b0ca56f2","timestampMs":1705533326501,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
kafka | [2024-01-17 23:15:11,599] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.455766746Z level=info msg="Executing migration" id="add column org_id in query_history_star"
policy-pap | [2024-01-17T23:15:26.628+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE
policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
kafka | [2024-01-17 23:15:11,599] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
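Each outbound request above is guarded by a timer: PAP registers Timer [name=<requestId>, expireMs=...] when it enqueues a PDP_UPDATE or PDP_STATE_CHANGE, cancels it when the matching PDP_STATUS arrives, and, as the 23:15:56 entries later in this log show, discards it as expired after the 30000ms wait. A rough sketch of that bookkeeping (hypothetical class and method names, not the actual ONAP TimerManager):

```python
import time

class RequestTimers:
    """Deadline bookkeeping per request id (illustrative sketch only)."""

    def __init__(self, timeout_ms: int = 30000):
        self.timeout_ms = timeout_ms
        self.deadlines: dict[str, int] = {}  # request id -> expireMs

    def register(self, request_id: str) -> None:
        # Called when a request is enqueued for publishing.
        expire_ms = int(time.time() * 1000) + self.timeout_ms
        self.deadlines[request_id] = expire_ms
        print(f"timer registered Timer [name={request_id}, expireMs={expire_ms}]")

    def cancel(self, request_id: str) -> None:
        # Called when the matching PDP_STATUS response arrives in time.
        if self.deadlines.pop(request_id, None) is not None:
            print(f"timer cancelled Timer [name={request_id}]")

    def sweep(self) -> None:
        # Periodic pass that drops requests whose deadline has passed.
        now_ms = int(time.time() * 1000)
        for request_id, expire_ms in list(self.deadlines.items()):
            if now_ms >= expire_ms:
                del self.deadlines[request_id]
                print(f"timer discarded (expired) Timer [name={request_id}]")
```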
grafana | logger=migrator t=2024-01-17T23:14:35.465066443Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.300347ms
policy-pap | [2024-01-17T23:15:26.640+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
kafka | [2024-01-17 23:15:11,669] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-17T23:14:35.468236116Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c14afc04-dda2-444d-acd0-3073b0ca56f2","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"226abeb0-4de8-4f87-ac6f-32138fbc9058","timestampMs":1705533326629,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
kafka | [2024-01-17 23:15:11,669] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-17T23:14:35.468296557Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=60.731µs
policy-pap | [2024-01-17T23:15:26.641+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c14afc04-dda2-444d-acd0-3073b0ca56f2
policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
kafka | [2024-01-17 23:15:11,670] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.472279054Z level=info msg="Executing migration" id="create correlation table v1"
policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
policy-pap | [2024-01-17T23:15:26.661+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-01-17 23:15:11,670] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.473130138Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=850.744µs
policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:27
policy-pap | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"c14afc04-dda2-444d-acd0-3073b0ca56f2","timestampMs":1705533326501,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-17 23:15:11,670] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.478175233Z level=info msg="Executing migration" id="add index correlations.uid"
policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:28
policy-pap | [2024-01-17T23:15:26.661+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
kafka | [2024-01-17 23:15:11,850] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
grafana | logger=migrator t=2024-01-17T23:14:35.479996694Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.817451ms
policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:28
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
kafka | [2024-01-17 23:15:11,851] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
grafana | logger=migrator t=2024-01-17T23:14:35.484589511Z level=info msg="Executing migration" id="add index correlations.source_uid"
policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:29
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"c14afc04-dda2-444d-acd0-3073b0ca56f2","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"226abeb0-4de8-4f87-ac6f-32138fbc9058","timestampMs":1705533326629,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-17 23:15:11,851] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.485630588Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.040957ms
policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:29
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpStateChange stopping
kafka | [2024-01-17 23:15:11,851] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
grafana | logger=migrator t=2024-01-17T23:14:35.492125577Z level=info msg="Executing migration" id="add correlation config column"
policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:29
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpStateChange stopping enqueue
kafka | [2024-01-17 23:15:11,851] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(ZZVFVp_CTPq7ZebUsmWrBQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.502054674Z level=info msg="Migration successfully executed" id="add correlation config column" duration=9.929177ms
policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:29
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpStateChange stopping timer
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.505300379Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=c14afc04-dda2-444d-acd0-3073b0ca56f2, expireMs=1705533356616]
policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:30
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.506080122Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=779.123µs
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpStateChange stopping listener
policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:30
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.509982698Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpStateChange stopped
policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:30
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.511124287Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.142519ms
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpStateChange successful
policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:31
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.516541678Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b start publishing next request
policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:31
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.546530222Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=29.989304ms
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate starting
policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:31
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.549963039Z level=info msg="Executing migration" id="create correlation v2"
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate starting listener
policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:32
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.550672441Z level=info msg="Migration successfully executed" id="create correlation v2" duration=708.102µs
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate starting timer
policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:32
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.553923906Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=b9138845-cc56-497c-8dee-71da850e574b, expireMs=1705533356665]
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:33
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.555116916Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.1929ms
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate starting enqueue
policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:33
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.559086303Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
policy-pap | [2024-01-17T23:15:26.665+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate started
policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:33
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.560600728Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.512925ms
policy-pap | [2024-01-17T23:15:26.666+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:33
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.564509124Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
policy-pap | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"b9138845-cc56-497c-8dee-71da850e574b","timestampMs":1705533326653,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
policy-pap | [2024-01-17T23:15:26.672+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
grafana | logger=migrator t=2024-01-17T23:14:35.566177082Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.667648ms
policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:34
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
policy-pap | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"b9138845-cc56-497c-8dee-71da850e574b","timestampMs":1705533326653,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-17T23:14:35.571172085Z level=info msg="Executing migration" id="copy correlation v1 to v2"
policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:34
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
policy-pap | [2024-01-17T23:15:26.672+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
grafana | logger=migrator t=2024-01-17T23:14:35.57140265Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=230.425µs
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:34
kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
policy-pap | [2024-01-17T23:15:26.678+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-17T23:14:35.575105492Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:34
grafana | logger=migrator t=2024-01-17T23:14:35.575876125Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=770.673µs
policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:34
policy-pap | {"source":"pap-c481ca0f-97e2-45bc-9615-5afa9d4237f0","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"b9138845-cc56-497c-8dee-71da850e574b","timestampMs":1705533326653,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-17T23:14:35.582087939Z level=info msg="Executing migration" id="add provisioning column"
policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:34
policy-pap | [2024-01-17T23:15:26.679+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
grafana | logger=migrator t=2024-01-17T23:14:35.592390992Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.304423ms
policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:34
policy-pap | [2024-01-17T23:15:26.685+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
grafana | logger=migrator t=2024-01-17T23:14:35.596775887Z level=info msg="Executing migration" id="create entity_events table"
policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:34
policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"b9138845-cc56-497c-8dee-71da850e574b","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"1096889e-4f82-44c6-9d25-d8a47fb87433","timestampMs":1705533326675,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
grafana | logger=migrator t=2024-01-17T23:14:35.597518419Z level=info msg="Migration successfully executed" id="create entity_events table" duration=740.622µs
policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:34
policy-pap | [2024-01-17T23:15:26.686+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate stopping
grafana | logger=migrator t=2024-01-17T23:14:35.600723413Z level=info msg="Executing migration" id="create dashboard public config v1"
policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35
policy-pap | [2024-01-17T23:15:26.686+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate stopping enqueue
grafana | logger=migrator t=2024-01-17T23:14:35.601636428Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=914.216µs
policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35
policy-pap | [2024-01-17T23:15:26.686+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate stopping timer
grafana | logger=migrator t=2024-01-17T23:14:35.605030055Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35
policy-pap | [2024-01-17T23:15:26.686+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=b9138845-cc56-497c-8dee-71da850e574b, expireMs=1705533356665]
grafana | logger=migrator t=2024-01-17T23:14:35.605506323Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35
policy-pap | [2024-01-17T23:15:26.686+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate stopping listener
grafana | logger=migrator t=2024-01-17T23:14:35.609462629Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35
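The grafana migrator lines report one Executing/Migration-successfully-executed pair per schema change, with a duration in either ms or µs. A small, illustrative parser for pulling those durations out of a captured log (the regex and function name are ad hoc helpers, not part of any tool used in this job):

```python
import re

# Matches e.g.: id="add provisioning column" duration=10.304423ms
DURATION_RE = re.compile(
    r'id="(?P<id>[^"]+)" duration=(?P<value>[\d.]+)(?P<unit>µs|ms)'
)

def migration_durations_ms(lines):
    """Yield (migration id, duration in ms) from grafana migrator log lines."""
    for line in lines:
        match = DURATION_RE.search(line)
        if match:
            value = float(match.group("value"))
            if match.group("unit") == "µs":
                value /= 1000.0  # normalise microseconds to milliseconds
            yield match.group("id"), value

sample = ('grafana | logger=migrator t=... level=info '
          'msg="Migration successfully executed" id="add provisioning column" '
          'duration=10.304423ms')
print(list(migration_durations_ms([sample])))  # [('add provisioning column', 10.304423)]
```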
1701242314260800u 1 2024-01-17 23:14:35 policy-pap | [2024-01-17T23:15:26.686+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate stopped grafana | logger=migrator t=2024-01-17T23:14:35.609934477Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 policy-pap | [2024-01-17T23:15:26.690+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] grafana | logger=migrator t=2024-01-17T23:14:35.613702571Z level=info msg="Executing migration" id="Drop old dashboard public config table" policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"b9138845-cc56-497c-8dee-71da850e574b","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"1096889e-4f82-44c6-9d25-d8a47fb87433","timestampMs":1705533326675,"name":"apex-7ff8679a-4a53-4eaf-beae-31cefdce632b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} grafana | logger=migrator t=2024-01-17T23:14:35.615050783Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.347922ms policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 policy-pap | [2024-01-17T23:15:26.690+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id b9138845-cc56-497c-8dee-71da850e574b grafana | logger=migrator t=2024-01-17T23:14:35.620773359Z level=info msg="Executing migration" id="recreate dashboard public config v1" policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 policy-pap | [2024-01-17T23:15:26.692+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b PdpUpdate successful grafana | logger=migrator t=2024-01-17T23:14:35.622240875Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.467055ms policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) policy-pap | [2024-01-17T23:15:26.692+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-7ff8679a-4a53-4eaf-beae-31cefdce632b has no more requests grafana | logger=migrator t=2024-01-17T23:14:35.627087635Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) policy-pap | [2024-01-17T23:15:35.086+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls grafana | logger=migrator t=2024-01-17T23:14:35.628827975Z level=info 
msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.74004ms policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) policy-pap | [2024-01-17T23:15:35.094+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls grafana | logger=migrator t=2024-01-17T23:14:35.632292343Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) policy-pap | [2024-01-17T23:15:35.497+00:00|INFO|SessionData|http-nio-6969-exec-4] unknown group testGroup grafana | logger=migrator t=2024-01-17T23:14:35.633388592Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.096149ms policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) policy-pap | [2024-01-17T23:15:36.026+00:00|INFO|SessionData|http-nio-6969-exec-4] create cached group testGroup grafana | logger=migrator t=2024-01-17T23:14:35.636465144Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) grafana | logger=migrator t=2024-01-17T23:14:35.637514771Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.050388ms policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 policy-pap | [2024-01-17T23:15:36.027+00:00|INFO|SessionData|http-nio-6969-exec-4] creating DB group testGroup grafana | logger=migrator t=2024-01-17T23:14:35.642146379Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 policy-pap | [2024-01-17T23:15:36.561+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group testGroup grafana | logger=migrator t=2024-01-17T23:14:35.643242477Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.095998ms kafka | 
[2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 grafana | logger=migrator t=2024-01-17T23:14:35.646195557Z level=info msg="Executing migration" id="Drop public config table" kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 policy-pap | [2024-01-17T23:15:36.861+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy onap.restart.tca 1.0.0 grafana | logger=migrator t=2024-01-17T23:14:35.647247665Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.050238ms kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 policy-pap | [2024-01-17T23:15:36.968+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 grafana | logger=migrator t=2024-01-17T23:14:35.652975831Z level=info msg="Executing migration" id="Recreate dashboard public config v2" kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 policy-pap | [2024-01-17T23:15:36.968+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group testGroup grafana | logger=migrator t=2024-01-17T23:14:35.654549367Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.572776ms kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:35 policy-pap | [2024-01-17T23:15:36.969+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group testGroup grafana | logger=migrator t=2024-01-17T23:14:35.658560825Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36 policy-pap | [2024-01-17T23:15:36.983+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, 
action=DEPLOYMENT, timestamp=2024-01-17T23:15:36Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-01-17T23:15:36Z, user=policyadmin)] grafana | logger=migrator t=2024-01-17T23:14:35.659701344Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.140309ms kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36 policy-pap | [2024-01-17T23:15:37.663+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup grafana | logger=migrator t=2024-01-17T23:14:35.666663121Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36 policy-pap | [2024-01-17T23:15:37.664+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 grafana | logger=migrator t=2024-01-17T23:14:35.66843828Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.776409ms kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36 policy-pap | [2024-01-17T23:15:37.664+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy onap.restart.tca 1.0.0 grafana | logger=migrator t=2024-01-17T23:14:35.672769584Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36 policy-pap | [2024-01-17T23:15:37.664+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup grafana | logger=migrator t=2024-01-17T23:14:35.67494064Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.162636ms kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36 policy-pap | [2024-01-17T23:15:37.664+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup grafana | logger=migrator 
t=2024-01-17T23:14:35.678561511Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36 policy-pap | [2024-01-17T23:15:37.676+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-17T23:15:37Z, user=policyadmin)] grafana | logger=migrator t=2024-01-17T23:14:35.707232413Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=28.670612ms kafka | [2024-01-17 23:15:12,197] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36 policy-pap | [2024-01-17T23:15:38.049+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup grafana | logger=migrator t=2024-01-17T23:14:35.710462107Z level=info msg="Executing migration" id="add annotations_enabled column" kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36 policy-pap | [2024-01-17T23:15:38.049+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup grafana | logger=migrator t=2024-01-17T23:14:35.717897652Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.434385ms kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36 policy-pap | [2024-01-17T23:15:38.049+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 grafana | logger=migrator t=2024-01-17T23:14:35.722739624Z level=info msg="Executing migration" id="add time_selection_enabled column" kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36 policy-pap | [2024-01-17T23:15:38.049+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 grafana | logger=migrator t=2024-01-17T23:14:35.731270787Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.530193ms kafka | [2024-01-17 23:15:12,198] 
kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger)
policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36
policy-pap | [2024-01-17T23:15:38.049+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
grafana | logger=migrator t=2024-01-17T23:14:35.735432607Z level=info msg="Executing migration" id="delete orphaned public dashboards"
kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger)
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1701242314260800u 1 2024-01-17 23:14:36
policy-pap | [2024-01-17T23:15:38.050+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
grafana | logger=migrator t=2024-01-17T23:14:35.735745282Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=293.744µs
kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger)
policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:36
policy-pap | [2024-01-17T23:15:38.062+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-01-17T23:15:38Z, user=policyadmin)]
grafana | logger=migrator t=2024-01-17T23:14:35.740441271Z level=info msg="Executing migration" id="add share column"
kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger)
policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:36
policy-pap | [2024-01-17T23:15:56.518+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=fbd43c14-a4e8-4077-b4cc-19a57a79f4ce, expireMs=1705533356518]
grafana | logger=migrator t=2024-01-17T23:14:35.747238475Z level=info msg="Migration successfully executed" id="add share column" duration=6.797334ms
kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger)
policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:36
policy-pap | [2024-01-17T23:15:56.617+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=c14afc04-dda2-444d-acd0-3073b0ca56f2, expireMs=1705533356616]
grafana | logger=migrator t=2024-01-17T23:14:35.752787838Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger)
policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:36
grafana | logger=migrator t=2024-01-17T23:14:35.752998602Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=210.614µs
kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger)
policy-pap | [2024-01-17T23:15:58.637+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:37
grafana | logger=migrator t=2024-01-17T23:14:35.757306594Z level=info msg="Executing migration" id="create file table"
kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger)
policy-pap | [2024-01-17T23:15:58.639+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.758146228Z level=info msg="Migration successfully executed" id="create file table" duration=838.104µs
policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.762459021Z level=info msg="Executing migration" id="file table idx: path natural pk"
policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,198] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger)
grafana | logger=migrator t=2024-01-17T23:14:35.764719979Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.261108ms
policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,205] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:35.768233898Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,206] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:35.769916376Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.682178ms
policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:35.774131597Z level=info msg="Executing migration" id="create file_meta table"
policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:35.774834339Z level=info msg="Migration successfully executed" id="create file_meta table" duration=702.982µs
policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:35.778300017Z level=info msg="Executing migration" id="file table idx: path key"
policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1701242314261000u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:35.779470167Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.16972ms
policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1701242314261000u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:35.787494542Z level=info msg="Executing migration" id="set path collation in file table"
policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1701242314261000u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:35.787642064Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=145.672µs
policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1701242314261000u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:35.794149564Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1701242314261000u 1 2024-01-17 23:14:37
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:35.794270295Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=121.242µs
policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1701242314261000u 1 2024-01-17 23:14:37
grafana | logger=migrator t=2024-01-17T23:14:35.800265046Z level=info msg="Executing migration" id="managed permissions migration"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1701242314261000u 1 2024-01-17 23:14:37
grafana | logger=migrator t=2024-01-17T23:14:35.800772414Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=507.428µs
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1701242314261000u 1 2024-01-17 23:14:37
grafana | logger=migrator t=2024-01-17T23:14:35.8040477Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1701242314261000u 1 2024-01-17 23:14:37
grafana | logger=migrator t=2024-01-17T23:14:35.804353855Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=306.185µs
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1701242314261100u 1 2024-01-17 23:14:37
grafana | logger=migrator t=2024-01-17T23:14:35.807920495Z level=info msg="Executing migration" id="RBAC action name migrator"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1701242314261200u 1 2024-01-17 23:14:38
grafana | logger=migrator t=2024-01-17T23:14:35.809011223Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.090708ms
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1701242314261200u 1 2024-01-17 23:14:38
grafana | logger=migrator t=2024-01-17T23:14:35.814368963Z level=info msg="Executing migration" id="Add UID column to playlist"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1701242314261200u 1 2024-01-17 23:14:38
grafana | logger=migrator t=2024-01-17T23:14:35.824040996Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.671973ms
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1701242314261200u 1 2024-01-17 23:14:38
grafana | logger=migrator t=2024-01-17T23:14:35.868526444Z level=info msg="Executing migration" id="Update uid column values in playlist"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1701242314261300u 1 2024-01-17 23:14:38
grafana | logger=migrator t=2024-01-17T23:14:35.86886035Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=334.215µs
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1701242314261300u 1 2024-01-17 23:14:38
grafana | logger=migrator t=2024-01-17T23:14:35.874041817Z level=info msg="Executing migration" id="Add index for uid in playlist"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1701242314261300u 1 2024-01-17 23:14:38
grafana | logger=migrator t=2024-01-17T23:14:35.876281584Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.243017ms
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
policy-db-migrator | policyadmin: OK @ 1300
grafana | logger=migrator t=2024-01-17T23:14:35.88020046Z level=info msg="Executing migration" id="update group index for alert rules"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:35.880722199Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=522.339µs
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:35.883494755Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:35.88376826Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=273.045µs
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:35.888445389Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
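Each policy-db-migrator row above is one flat result record (sequence, script, operation, from-version, to-version, tag, success flag, timestamp), and the stream closes with the summary line "policyadmin: OK @ 1300". A small sketch for tallying those rows from a captured log follows; the row shape is assumed from exactly what is printed here, and the parsing helper is hypothetical.

import re

# One migrator result row, as printed in the log above, e.g.:
# policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1701242314260900u 1 2024-01-17 23:14:36
ROW = re.compile(
    r"policy-db-migrator \| (\d+) (\S+\.sql) (\w+) (\w+) (\w+) (\w+) (\d) "
    r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
)

def summarize(log_text):
    """Return (succeeded, failed) counts based on the success-flag column."""
    ok = failed = 0
    for _seq, _script, _op, _frm, _to, _tag, success, _ts in ROW.findall(log_text):
        if success == "1":
            ok += 1
        else:
            failed += 1
    return ok, failed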
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:35.889245622Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=800.253µs
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:35.892672639Z level=info msg="Executing migration" id="add action column to seed_assignment"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:35.904646791Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=11.974752ms
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:35.907944897Z level=info msg="Executing migration" id="add scope column to seed_assignment"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:35.916697003Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.753917ms
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:35.925594503Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:35.926386186Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=794.213µs
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:35.929463268Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:36.037720557Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=108.25755ms
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:36.041339448Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:36.042261384Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=922.426µs
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:36.049988273Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:36.051295006Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.305603ms
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:36.055564357Z level=info msg="Executing migration" id="add primary key to seed_assigment"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:36.090977613Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=35.414465ms
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:36.098078352Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:36.098338717Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=260.355µs
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:36.106052806Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:36.106516154Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=463.839µs
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:36.11044258Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:36.110836686Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=396.896µs
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:36.11462754Z level=info msg="Executing migration" id="create folder table"
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:36.115992393Z level=info msg="Migration successfully executed" id="create folder table" duration=1.365803ms
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:36.120145162Z level=info msg="Executing migration" id="Add index for parent_uid"
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
grafana | logger=migrator t=2024-01-17T23:14:36.121433715Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.289122ms
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=migrator t=2024-01-17T23:14:36.125685545Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
grafana | logger=migrator t=2024-01-17T23:14:36.127239652Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.551587ms
grafana | logger=migrator t=2024-01-17T23:14:36.130756031Z level=info msg="Executing migration" id="Update folder title length"
grafana | logger=migrator t=2024-01-17T23:14:36.130784561Z level=info msg="Migration successfully executed" id="Update folder title length" duration=27.32µs
grafana | logger=migrator t=2024-01-17T23:14:36.136331405Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
grafana | logger=migrator t=2024-01-17T23:14:36.138727105Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.39529ms
grafana | logger=migrator t=2024-01-17T23:14:36.143480805Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
grafana | logger=migrator t=2024-01-17T23:14:36.144870408Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.389383ms
grafana | logger=migrator t=2024-01-17T23:14:36.150191598Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
grafana | logger=migrator t=2024-01-17T23:14:36.151450239Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.257931ms
grafana | logger=migrator t=2024-01-17T23:14:36.155954464Z level=info msg="Executing migration" id="create anon_device table"
grafana | logger=migrator t=2024-01-17T23:14:36.15681292Z level=info msg="Migration successfully executed" id="create anon_device table" duration=858.326µs
grafana | logger=migrator t=2024-01-17T23:14:36.163768626Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
grafana | logger=migrator t=2024-01-17T23:14:36.165900282Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.130216ms
grafana | logger=migrator t=2024-01-17T23:14:36.170410077Z level=info msg="Executing migration" id="add index anon_device.updated_at"
grafana | logger=migrator t=2024-01-17T23:14:36.171681399Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.271322ms
grafana | logger=migrator t=2024-01-17T23:14:36.17529461Z level=info msg="Executing migration" id="create signing_key table"
grafana | logger=migrator t=2024-01-17T23:14:36.176188634Z level=info msg="Migration successfully executed" id="create signing_key table" duration=893.364µs
grafana | logger=migrator t=2024-01-17T23:14:36.179631303Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
grafana | logger=migrator t=2024-01-17T23:14:36.181022326Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.390563ms
grafana | logger=migrator t=2024-01-17T23:14:36.184681077Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
grafana | logger=migrator t=2024-01-17T23:14:36.185946299Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.265262ms
grafana | logger=migrator t=2024-01-17T23:14:36.190874202Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
grafana | logger=migrator t=2024-01-17T23:14:36.191238248Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=364.656µs
grafana | logger=migrator t=2024-01-17T23:14:36.194408401Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
grafana | logger=migrator t=2024-01-17T23:14:36.206453483Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=12.045082ms
grafana | logger=migrator t=2024-01-17T23:14:36.214980266Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
grafana | logger=migrator t=2024-01-17T23:14:36.21572713Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=748.254µs
grafana | logger=migrator t=2024-01-17T23:14:36.219765467Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
grafana | logger=migrator t=2024-01-17T23:14:36.22167497Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.909903ms
grafana | logger=migrator t=2024-01-17T23:14:36.225237199Z level=info msg="Executing migration" id="create sso_setting table"
grafana | logger=migrator t=2024-01-17T23:14:36.226691783Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.453914ms
grafana | logger=migrator t=2024-01-17T23:14:36.231934241Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
grafana | logger=migrator t=2024-01-17T23:14:36.232674055Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=739.063µs
grafana | logger=migrator t=2024-01-17T23:14:36.237659388Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
grafana | logger=migrator t=2024-01-17T23:14:36.238108876Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=450.879µs
grafana | logger=migrator t=2024-01-17T23:14:36.243565238Z level=info msg="migrations completed" performed=523 skipped=0 duration=10.934788896s
grafana | logger=sqlstore t=2024-01-17T23:14:36.255100681Z level=info msg="Created default admin" user=admin
grafana | logger=sqlstore t=2024-01-17T23:14:36.255422957Z level=info msg="Created default organization"
grafana | logger=secrets t=2024-01-17T23:14:36.292901036Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
grafana | logger=plugin.store t=2024-01-17T23:14:36.317227365Z level=info msg="Loading plugins..."
grafana | logger=local.finder t=2024-01-17T23:14:36.355896846Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
grafana | logger=plugin.store t=2024-01-17T23:14:36.355955307Z level=info msg="Plugins loaded" count=55 duration=38.728732ms
grafana | logger=query_data t=2024-01-17T23:14:36.35976471Z level=info msg="Query Service initialization"
grafana | logger=live.push_http t=2024-01-17T23:14:36.363401521Z level=info msg="Live Push Gateway initialization"
grafana | logger=ngalert.migration t=2024-01-17T23:14:36.381666639Z level=info msg=Starting
grafana | logger=ngalert.migration orgID=1 t=2024-01-17T23:14:36.382491722Z level=info msg="Migrating alerts for organisation"
grafana | logger=ngalert.migration orgID=1 t=2024-01-17T23:14:36.382830838Z level=info msg="Alerts found to migrate" alerts=0
grafana | logger=ngalert.migration orgID=1 t=2024-01-17T23:14:36.383182063Z level=warn msg="No available receivers"
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,207] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,207] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-01-17T23:14:36.3859379Z level=info msg="Completed legacy migration"
grafana | logger=infra.usagestats.collector t=2024-01-17T23:14:36.418561828Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
grafana | logger=provisioning.datasources t=2024-01-17T23:14:36.420591613Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
grafana | logger=provisioning.alerting t=2024-01-17T23:14:36.435056426Z level=info msg="starting to provision alerting"
grafana | logger=provisioning.alerting t=2024-01-17T23:14:36.435072006Z level=info msg="finished to provision alerting"
grafana | logger=ngalert.state.manager t=2024-01-17T23:14:36.435169678Z level=info msg="Warming state cache for startup"
grafana | logger=ngalert.state.manager t=2024-01-17T23:14:36.43533951Z level=info msg="State cache has been initialized" states=0 duration=169.382µs
grafana | logger=ngalert.scheduler t=2024-01-17T23:14:36.435354351Z level=info msg="Starting scheduler" tickInterval=10s
grafana | logger=ticker t=2024-01-17T23:14:36.435393011Z level=info msg=starting first_tick=2024-01-17T23:14:40Z
grafana | logger=grafanaStorageLogger t=2024-01-17T23:14:36.435425932Z level=info msg="Storage starting"
grafana | logger=ngalert.multiorg.alertmanager t=2024-01-17T23:14:36.437544978Z level=info msg="Starting MultiOrg Alertmanager"
grafana | logger=http.server t=2024-01-17T23:14:36.437555078Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
grafana | logger=sqlstore.transactions t=2024-01-17T23:14:36.456338183Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=grafana.update.checker t=2024-01-17T23:14:36.465436467Z level=info msg="Update check succeeded" duration=27.452692ms
grafana | logger=plugins.update.checker t=2024-01-17T23:14:36.525266012Z level=info msg="Update check succeeded" duration=86.340522ms
grafana | logger=sqlstore.transactions t=2024-01-17T23:14:36.588389253Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=sqlstore.transactions t=2024-01-17T23:14:36.600600018Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
grafana | logger=infra.usagestats t=2024-01-17T23:15:38.450695074Z level=info msg="Usage stats are ready to report"
kafka | [2024-01-17 23:15:12,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,208] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
kafka | [2024-01-17 23:15:12,208] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,211] INFO [Broker id=1] Finished LeaderAndIsr request in 5654ms correlationId 1 from controller 1 for 51 partitions (state.change.logger)
kafka | [2024-01-17 23:15:12,215] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=ZZVFVp_CTPq7ZebUsmWrBQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=TB_lmqBXRfuqVYs70rfOKA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
kafka | [2024-01-17 23:15:12,215] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,216] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,217] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,218] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,219] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
kafka | [2024-01-17 23:15:12,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
(kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-17 23:15:12,220] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE 
[Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 
(state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,222] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 
in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,223] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,244] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,244] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,244] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,246] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) kafka | [2024-01-17 23:15:12,247] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) kafka | [2024-01-17 23:15:12,315] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 
4041dc88-5007-445a-911f-3e52b8d238d9 in Empty state. Created a new member id consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2-e780fe82-c0bb-4c48-83e1-8a127e9c91dc and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-17 23:15:12,315] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 093ff4e0-f365-4742-90a8-254a3129a143 in Empty state. Created a new member id consumer-093ff4e0-f365-4742-90a8-254a3129a143-3-7e258dba-fe5e-4ce8-aa02-aab1610b4ec3 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-17 23:15:12,317] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-752789c8-84e9-4da5-b6a8-9ea9c666d1f8 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-17 23:15:12,339] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-752789c8-84e9-4da5-b6a8-9ea9c666d1f8 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-17 23:15:12,339] INFO [GroupCoordinator 1]: Preparing to rebalance group 093ff4e0-f365-4742-90a8-254a3129a143 in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-093ff4e0-f365-4742-90a8-254a3129a143-3-7e258dba-fe5e-4ce8-aa02-aab1610b4ec3 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-17 23:15:12,339] INFO [GroupCoordinator 1]: Preparing to rebalance group 4041dc88-5007-445a-911f-3e52b8d238d9 in state PreparingRebalance with old generation 0 (__consumer_offsets-36) (reason: Adding new member consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2-e780fe82-c0bb-4c48-83e1-8a127e9c91dc with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-17 23:15:15,349] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-17 23:15:15,354] INFO [GroupCoordinator 1]: Stabilized group 093ff4e0-f365-4742-90a8-254a3129a143 generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-17 23:15:15,355] INFO [GroupCoordinator 1]: Stabilized group 4041dc88-5007-445a-911f-3e52b8d238d9 generation 1 (__consumer_offsets-36) with 1 members (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-17 23:15:15,371] INFO [GroupCoordinator 1]: Assignment received from leader consumer-4041dc88-5007-445a-911f-3e52b8d238d9-2-e780fe82-c0bb-4c48-83e1-8a127e9c91dc for group 4041dc88-5007-445a-911f-3e52b8d238d9 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) kafka | [2024-01-17 23:15:15,376] INFO [GroupCoordinator 1]: Assignment received from leader consumer-093ff4e0-f365-4742-90a8-254a3129a143-3-7e258dba-fe5e-4ce8-aa02-aab1610b4ec3 for group 093ff4e0-f365-4742-90a8-254a3129a143 for generation 1. The group has 1 members, 0 of which are static. 
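The coordinator messages above are the normal dynamic-membership handshake: each consumer first joins with an unknown member id, is handed a freshly minted one and told to rejoin (the MemberIdRequiredException cited as the rebalance reason), and the group then moves Empty -> PreparingRebalance -> Stabilized before the leader's assignment is accepted. A minimal sketch of how that state could be checked from inside the compose network with the stock Kafka CLI tools; the group name policy-pap, topic policy-pdp-pap, and broker address kafka:9092 are taken from the log, while the unsuffixed tool names assume a Confluent-style image (plain Apache distributions suffix them with .sh):

# Show the topic whose leadership the broker just cached.
kafka-topics --bootstrap-server kafka:9092 --describe --topic policy-pdp-pap
# List the groups the coordinator is tracking, then inspect one of them:
# state, members, assigned partitions, and lag.
kafka-consumer-groups --bootstrap-server kafka:9092 --list
kafka-consumer-groups --bootstrap-server kafka:9092 --describe --group policy-pap --members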
++ echo 'Tearing down containers...'
Tearing down containers...
++ docker-compose down -v --remove-orphans
Stopping policy-apex-pdp ...
Stopping policy-pap ...
Stopping grafana ...
Stopping kafka ...
Stopping policy-api ...
Stopping mariadb ...
Stopping compose_zookeeper_1 ...
Stopping prometheus ...
Stopping simulator ...
Stopping grafana ... done
Stopping prometheus ... done
Stopping policy-apex-pdp ... done
Stopping simulator ... done
Stopping policy-pap ... done
Stopping mariadb ... done
Stopping kafka ... done
Stopping compose_zookeeper_1 ... done
Stopping policy-api ... done
Removing policy-apex-pdp ...
Removing policy-pap ...
Removing grafana ...
Removing kafka ...
Removing policy-api ...
Removing policy-db-migrator ...
Removing mariadb ...
Removing compose_zookeeper_1 ...
Removing prometheus ...
Removing simulator ...
Removing grafana ... done
Removing policy-api ... done
Removing policy-db-migrator ... done
Removing simulator ... done
Removing prometheus ... done
Removing policy-apex-pdp ... done
Removing mariadb ... done
Removing policy-pap ... done
Removing kafka ... done
Removing compose_zookeeper_1 ... done
Removing network compose_default
++ cd /w/workspace/policy-pap-master-project-csit-pap
+ load_set
+ _setopts=hxB
++ echo braceexpand:hashall:interactive-comments:xtrace
++ tr : ' '
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o braceexpand
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o hashall
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o interactive-comments
+ for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
+ set +o xtrace
++ echo hxB
++ sed 's/./& /g'
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +h
+ for i in $(echo "$_setopts" | sed 's/./& /g')
+ set +x
+ [[ -n /tmp/tmp.86XhMUNijP ]]
+ rsync -av /tmp/tmp.86XhMUNijP/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
sending incremental file list
./
log.html
output.xml
report.html
testplan.txt
sent 910,628 bytes  received 95 bytes  1,821,446.00 bytes/sec
total size is 910,086  speedup is 1.00
+ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
+ exit 0
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 2130 killed;
[ssh-agent] Stopped.
Robot results publisher started...
-Parsing output xml: Done!
WARNING! Could not find file: **/log.html
WARNING! Could not find file: **/report.html
-Copying log files to build dir: Done!
-Assigning results to build: Done!
-Checking thresholds: Done!
Done publishing Robot results.
[PostBuildScript] - [INFO] Executing post build scripts.
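The + and ++ lines after the teardown are bash xtrace output from the job's load_set helper, which switches off every shell option recorded in SHELLOPTS and then the saved single-letter options (hxB). A minimal sketch of that idiom, reconstructed from the trace above rather than from the real script:

#!/bin/bash
# Reconstruction of the option-reset idiom seen in the xtrace: first clear
# every long option currently listed in SHELLOPTS, then clear the short
# options that were saved earlier in _setopts.
load_set() {
    _setopts=hxB
    for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
        set +o "$i"          # e.g. set +o braceexpand, set +o xtrace
    done
    for i in $(echo "$_setopts" | sed 's/./& /g'); do
        set "+$i"            # e.g. set +h, set +x
    done
}
load_set

Resetting xtrace last is what makes the trace stop mid-loop in the log: once set +x runs, no further commands are echoed.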
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11729757935559815524.sh
---> sysstat.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1745630386037835858.sh
---> package-listing.sh
++ facter osfamily
++ tr '[:upper:]' '[:lower:]'
+ OS_FAMILY=debian
+ workspace=/w/workspace/policy-pap-master-project-csit-pap
+ START_PACKAGES=/tmp/packages_start.txt
+ END_PACKAGES=/tmp/packages_end.txt
+ DIFF_PACKAGES=/tmp/packages_diff.txt
+ PACKAGES=/tmp/packages_start.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ PACKAGES=/tmp/packages_end.txt
+ case "${OS_FAMILY}" in
+ dpkg -l
+ grep '^ii'
+ '[' -f /tmp/packages_start.txt ']'
+ '[' -f /tmp/packages_end.txt ']'
+ diff /tmp/packages_start.txt /tmp/packages_end.txt
+ '[' /w/workspace/policy-pap-master-project-csit-pap ']'
+ mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/
+ cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15542347597978687450.sh
---> capture-instance-metadata.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ONT8 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ONT8/bin to PATH
INFO: Running in OpenStack, capturing instance metadata
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9598751753018985353.sh
provisioning config files...
copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config11918184313179869096tmp
Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties content
SERVER_ID=logs
[EnvInject] - Variables injected successfully.
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8560636373764010896.sh
---> create-netrc.sh
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins16958195737947213097.sh
---> python-tools-install.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ONT8 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
lf-activate-venv(): INFO: Adding /tmp/venv-ONT8/bin to PATH
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins134652400046052874.sh
---> sudo-logs.sh
Archiving 'sudo' log..
[policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17690998474003572771.sh
---> job-cost.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ONT8 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lftools 0.37.8 requires openstacksdk<1.5.0, but you have openstacksdk 2.1.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-ONT8/bin to PATH
INFO: No Stack...
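The package-listing.sh trace above snapshots the node's dpkg state so the build archive records what the job installed. A condensed sketch of the Debian branch it traces (the file paths and workspace are the ones logged; the control flow is a reconstruction of the trace, not the actual LF script):

#!/bin/bash
# Snapshot the installed packages, diff against the pre-build snapshot,
# and archive all three lists with the build.
START_PACKAGES=/tmp/packages_start.txt
END_PACKAGES=/tmp/packages_end.txt
DIFF_PACKAGES=/tmp/packages_diff.txt
workspace=/w/workspace/policy-pap-master-project-csit-pap

PACKAGES=$START_PACKAGES
[ -n "$workspace" ] && PACKAGES=$END_PACKAGES   # end-of-build run, as traced
dpkg -l | grep '^ii' > "$PACKAGES"

# Diff only when both snapshots exist; diff exits 1 when the lists differ,
# so neutralize that for set -e scripts.
if [ -f "$START_PACKAGES" ] && [ -f "$END_PACKAGES" ]; then
    diff "$START_PACKAGES" "$END_PACKAGES" > "$DIFF_PACKAGES" || true
fi
mkdir -p "$workspace/archives/"
cp -f "$DIFF_PACKAGES" "$END_PACKAGES" "$START_PACKAGES" "$workspace/archives/"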
INFO: Retrieving Pricing Info for: v3-standard-8
INFO: Archiving Costs
[policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins829609508667778766.sh
---> logs-deploy.sh
Setup pyenv:
  system
  3.8.13
  3.9.13
* 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
lf-activate-venv(): INFO: Reuse venv:/tmp/venv-ONT8 from file:/tmp/.os_lf_venv
lf-activate-venv(): INFO: Installing: lftools
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
python-openstackclient 6.4.0 requires openstacksdk>=2.0.0, but you have openstacksdk 1.4.0 which is incompatible.
lf-activate-venv(): INFO: Adding /tmp/venv-ONT8/bin to PATH
INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1540
INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
Archives upload complete.
INFO: archiving logs to Nexus
---> uname -a:
Linux prd-ubuntu1804-docker-8c-8g-13055 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
---> lscpu:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           8
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC-Rome Processor
Stepping:            0
CPU MHz:             2799.998
BogoMIPS:            5599.99
Virtualization:      AMD-V
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-7
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
---> nproc:
8
---> df -h:
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
tmpfs           3.2G  708K  3.2G   1% /run
/dev/vda1       155G   14G  142G   9% /
tmpfs            16G     0   16G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda15      105M  4.4M  100M   5% /boot/efi
tmpfs           3.2G     0  3.2G   0% /run/user/1001
---> free -m:
              total        used        free      shared  buff/cache   available
Mem:          32167         842       24853           0        6471       30869
Swap:          1023           0        1023
---> ip addr:
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:fe:fb:eb brd ff:ff:ff:ff:ff:ff
    inet 10.30.107.110/23 brd 10.30.107.255 scope global dynamic ens3
       valid_lft 85917sec preferred_lft 85917sec
    inet6 fe80::f816:3eff:fefe:fbeb/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:43:60:b6:92 brd ff:ff:ff:ff:ff:ff
    inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
       valid_lft forever preferred_lft forever
---> sar -b -r -n DEV:
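The host fingerprint above (uname, lscpu, nproc, df, free, ip addr) is appended at log-deploy time so every archived build records the node it ran on. The same block could be regenerated with a few lines of shell; the commands and the arrow-style headers are the ones shown in the log, while the output file name is an assumption:

#!/bin/bash
# Collect the same host diagnostics the job appends to its console log.
{
    echo '---> uname -a:';  uname -a
    echo '---> lscpu:';     lscpu
    echo '---> nproc:';     nproc
    echo '---> df -h:';     df -h
    echo '---> free -m:';   free -m
    echo '---> ip addr:';   ip addr
} > host-diagnostics.txt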
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-13055)  01/17/24  _x86_64_  (8 CPU)

23:10:23     LINUX RESTART      (8 CPU)

23:11:01          tps      rtps      wtps   bread/s   bwrtn/s
23:12:01        91.27     18.23     73.04   1043.83  25275.12
23:13:01       131.14     23.15    108.00   2778.20  31752.84
23:14:01       195.08      0.08    195.00     10.80 113333.11
23:15:01       347.94     11.63    336.31    777.47  68761.17
23:16:01        27.33      0.38     26.95     32.79  18974.07
23:17:01        15.33      0.03     15.30      2.80  19024.38
23:18:01        75.52      1.43     74.09    113.05  21775.70
Average:       126.23      7.85    118.38    679.85  42699.49

23:11:01    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
23:12:01     30090656  31712292   2848556      8.65     70572   1860720   1399400      4.12    858332   1693740    193832
23:13:01     28732736  31659404   4206476     12.77    104216   3089332   1585864      4.67    994660   2827620   1072112
23:14:01     25561864  31651912   7377348     22.40    139864   6072952   1542928      4.54   1037016   5808532   1227156
23:15:01     23660892  29984744   9278320     28.17    155752   6270420   8295200     24.41   2861596   5809328       472
23:16:01     23177868  29508132   9761344     29.63    157040   6273260   8882452     26.13   3358232   5787656       292
23:17:01     23219504  29551036   9719708     29.51    157136   6274212   8914736     26.23   3316316   5787320       996
23:18:01     25520576  31673072   7418636     22.52    160064   6111112   1469420      4.32   1239628   5627728     37836
Average:     25709157  30820085   7230055     21.95    134949   5136001   4584286     13.49   1952254   4763132    361814

23:11:01            IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
23:12:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:12:01             ens3     62.66     43.08    996.90      7.42      0.00      0.00      0.00      0.00
23:12:01               lo      1.40      1.40      0.15      0.15      0.00      0.00      0.00      0.00
23:13:01  br-3afae52bdc80      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:13:01             ens3    222.83    152.06   6287.57     15.41      0.00      0.00      0.00      0.00
23:13:01               lo      7.07      7.07      0.66      0.66      0.00      0.00      0.00      0.00
23:14:01  br-3afae52bdc80      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:14:01             ens3   1124.86    551.49  27919.32     40.24      0.00      0.00      0.00      0.00
23:14:01               lo      6.33      6.33      0.64      0.64      0.00      0.00      0.00      0.00
23:15:01  br-3afae52bdc80      0.77      0.62      0.06      0.30      0.00      0.00      0.00      0.00
23:15:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:15:01             ens3      5.63      5.12      1.38      1.16      0.00      0.00      0.00      0.00
23:15:01      veth407fe71     27.05     25.26     11.57     16.28      0.00      0.00      0.00      0.00
23:16:01  br-3afae52bdc80      2.07      2.45      1.82      1.74      0.00      0.00      0.00      0.00
23:16:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:16:01             ens3      4.77      4.62      1.01      1.22      0.00      0.00      0.00      0.00
23:16:01      veth407fe71     26.53     22.31      8.44     24.17      0.00      0.00      0.00      0.00
23:17:01  br-3afae52bdc80      1.43      1.63      0.11      0.15      0.00      0.00      0.00      0.00
23:17:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:17:01             ens3      2.53      2.20      0.50      0.68      0.00      0.00      0.00      0.00
23:17:01      veth407fe71      0.38      0.43      0.59      0.03      0.00      0.00      0.00      0.00
23:18:01          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
23:18:01             ens3     56.76     40.66     71.61     32.04      0.00      0.00      0.00      0.00
23:18:01               lo     35.04     35.04      6.20      6.20      0.00      0.00      0.00      0.00
Average:          docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:             ens3    211.43    114.17   5039.75     14.02      0.00      0.00      0.00      0.00
Average:               lo      4.44      4.44      0.84      0.84      0.00      0.00      0.00      0.00

---> sar -P ALL:
Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-13055)  01/17/24  _x86_64_  (8 CPU)

23:10:23     LINUX RESTART      (8 CPU)

23:11:01     CPU     %user     %nice   %system   %iowait    %steal     %idle
23:12:01     all      9.47      0.00      0.65      2.77      0.03     87.08
23:12:01       0      0.75      0.00      0.38      0.53      0.02     98.32
23:12:01       1      2.17      0.00      0.18      0.17      0.02     97.46
23:12:01       2     20.43      0.00      1.47      1.49      0.03     76.58
23:12:01       3      0.27      0.00      0.45     16.83      0.03     82.42
23:12:01       4     30.92      0.00      1.45      2.67      0.05     64.90
23:12:01       5     15.36      0.00      0.82      0.37      0.03     83.43
23:12:01       6      4.96      0.00      0.23      0.03      0.07     94.71
23:12:01       7      0.97      0.00      0.22      0.08      0.02     98.72
23:13:01     all     10.81      0.00      1.94      3.00      0.04     84.21
23:13:01       0      9.50      0.00      1.73      0.81      0.02     87.96
23:13:01       1      4.99      0.00      1.27      0.22      0.03     93.50
23:13:01       2     19.64      0.00      2.31      1.25      0.05     76.75
23:13:01       3     19.40      0.00      2.28      5.86      0.05     72.41
23:13:01       4      4.51      0.00      1.63     13.95      0.03     79.88
23:13:01       5     14.95      0.00      2.54      1.08      0.05     81.39
23:13:01       6      3.24      0.00      1.63      0.23      0.07     94.83
23:13:01       7     10.31      0.00      2.10      0.62      0.03     86.94
23:14:01     all      9.84      0.00      4.22      8.85      0.07     77.02
23:14:01       0      8.83      0.00      2.66      0.20      0.05     88.26
23:14:01       1     11.30      0.00      5.07     41.96      0.07     41.61
23:14:01       2     10.16      0.00      3.40      3.61      0.07     82.76
23:14:01       3      8.74      0.00      5.17      0.73      0.08     85.28
23:14:01       4      9.75      0.00      3.86     18.18      0.08     68.13
23:14:01       5     10.67      0.00      3.83      0.02      0.10     85.39
23:14:01       6      9.93      0.00      4.89      5.92      0.07     79.19
23:14:01       7      9.31      0.00      4.89      0.63      0.05     85.13
23:15:01     all     21.83      0.00      3.56      8.47      0.08     66.06
23:15:01       0     20.26      0.00      3.55      9.33      0.07     66.80
23:15:01       1     19.95      0.00      3.41     30.66      0.08     45.90
23:15:01       2     23.02      0.00      4.23     10.88      0.08     61.79
23:15:01       3     23.49      0.00      3.54      1.99      0.07     70.92
23:15:01       4     26.46      0.00      3.98      1.48      0.08     67.99
23:15:01       5     20.63      0.00      2.80      5.19      0.08     71.30
23:15:01       6     22.85      0.00      3.70      3.41      0.05     70.00
23:15:01       7     18.02      0.00      3.29      4.86      0.08     73.75
23:16:01     all     11.87      0.00      1.32      2.64      0.12     84.05
23:16:01       0     11.76      0.00      1.63      0.10      0.05     86.46
23:16:01       1     12.47      0.00      1.31      0.76      0.13     85.33
23:16:01       2     13.48      0.00      1.51      1.86      0.07     83.08
23:16:01       3     13.54      0.00      1.46      0.93      0.27     83.80
23:16:01       4     11.18      0.00      1.14      0.52      0.13     87.03
23:16:01       5      8.51      0.00      0.85     12.20      0.17     78.27
23:16:01       6     11.18      0.00      1.48      2.40      0.08     84.86
23:16:01       7     12.88      0.00      1.19      2.36      0.07     83.50
23:17:01     all      0.76      0.00      0.14      1.24      0.05     97.81
23:17:01       0      0.55      0.00      0.18      0.00      0.07     99.20
23:17:01       1      0.87      0.00      0.17      0.00      0.05     98.92
23:17:01       2      0.84      0.00      0.12      0.00      0.05     99.00
23:17:01       3      0.78      0.00      0.15      0.12      0.03     98.92
23:17:01       4      0.68      0.00      0.15      0.00      0.03     99.13
23:17:01       5      0.65      0.00      0.12      9.73      0.07     89.44
23:17:01       6      0.97      0.00      0.12      0.00      0.03     98.88
23:17:01       7      0.72      0.00      0.13      0.12      0.03     99.00
23:18:01     all      5.50      0.00      0.75      1.57      0.04     92.15
23:18:01       0     17.77      0.00      1.17      0.23      0.05     80.78
23:18:01       1      3.31      0.00      0.73      0.27      0.03     95.66
23:18:01       2      2.99      0.00      0.64      0.27      0.03     96.07
23:18:01       3      1.37      0.00      0.62      0.25      0.03     97.73
23:18:01       4      1.20      0.00      0.67      0.65      0.03     97.45
23:18:01       5      1.47      0.00      0.57      5.48      0.05     92.43
23:18:01       6      0.97      0.00      0.66      4.82      0.05     93.50
23:18:01       7     14.90      0.00      0.94      0.59      0.03     83.55
Average:     all     10.00      0.00      1.79      4.06      0.06     84.09
Average:       0      9.90      0.00      1.61      1.59      0.05     86.85
Average:       1      7.82      0.00      1.72     10.44      0.06     79.96
Average:       2     12.92      0.00      1.95      2.76      0.06     82.32
Average:       3      9.63      0.00      1.95      3.82      0.08     84.52
Average:       4     12.10      0.00      1.83      5.32      0.06     80.69
Average:       5     10.31      0.00      1.64      4.87      0.08     83.10
Average:       6      7.72      0.00      1.81      2.39      0.06     88.02
Average:       7      9.57      0.00      1.81      1.32      0.05     87.25
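The two reports above are standard sysstat output sampled once a minute by the sysstat.sh step. A sketch of regenerating the same tables from the day's binary data file; the flag sets (-b -r -n DEV and -P ALL) are the ones shown in the report headers, while the data-file path follows the Debian default layout and the time window bracketing the build are both assumptions:

#!/bin/bash
# Replay the I/O, memory, and network-device report, then the per-CPU
# breakdown, restricted to the build window seen in the log.
DATAFILE=/var/log/sysstat/sa17
sar -b -r -n DEV -f "$DATAFILE" -s 23:10:00 -e 23:19:00
sar -P ALL -f "$DATAFILE" -s 23:10:00 -e 23:19:00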